Anatomy of a Scary Statistic

One of the things that I find interesting is the power and resonance a particular statistic can have on a person based on their preexisting beliefs.  There are two dimensions to this that I’d like to explore over a couple of posts.  The first, and the subject of this particular post, is how a single statistic can be used as rather convincing evidence to bolster two opposite positions.

Before we get to the statistic I am referring to, let’s start with the background.  Recently I watched Katie Couric’s new gun documentary, Under The Gun.  For those not aware, there is some controversy surrounding the film.  Essentially, Couric asks members of the Virginia Citizens’ Defense League the following question, and then close to 10 seconds of silence and blank looks were edited in, giving the impression that it is a powerful question that they are unable to answer:

If there are no background checks for gun purchasers, how do you prevent felons or terrorists from purchasing a gun?

In reality, the members immediately come up with answers.  Setting aside the ethics of the deceptive edit, one of the benefits of cutting the scene that way is that it allows that question to linger in the viewer’s mind uncontested – what would happen if there were nothing to stop these dangerous individuals from buying guns?  Later in the film they revisit part of this question with a narrative that, if taken at face value, is meant to be absolutely terrifying:

One of the issues that’s come up a lot in the last few years is that anyone who is on the terror watchlist is not prohibited from buying a gun

They then show a narrator asking a series of NRA members how they feel about a person on the government’s terror watchlist not being able to board a plane, but being able to legally buy a gun1.  Of course, the responses they show are from people who are dumbfounded by the question.  After all, what possible response could there be to why extremely dangerous individuals should be able to legally buy deadly weapons?  Finally, setup complete, they hit you with the statistic:

From 2004 – 2014 over 2,000 terror suspects legally purchased guns in the United States

Read more

  1. It should be noted at this point that the No Fly List is only one component of the larger Terrorist Screening Database, so not everyone on the terror watchlist would actually be prohibited from boarding a plane

Professor Twitter and the Problem of the Low False Rape Narrative

Yesterday1 I saw an interesting argument taking place in my Twitter feed and couldn’t let it go without comment2.  For the purposes of this post, we are going to refer to the Twitter author in question as Professor Twitter.  So how did this all start?  Professor Twitter posted the following:

MRA @reddit thread encourages men to record sex to avoid false rape allegations

1/2 Sigh: 1. Recording sex w/o consent is a sex crime; 2. recording sex doesn’t prove it’s not rape

At this point I agree with Professor Twitter’s sentiment – after all, this is a plainly ridiculous suggestion.  Then, The Professor follows up:

2/2  3. Statistical likelihood of being falsely accused of rape is infinitesimal

This, of course, is where the arguments started.  Scott tweeted suggesting that perhaps a claim such as that should come with some sort of citation (which is how I first saw what was going on), and soon others followed, asking for The Professor’s source.  Now, the prevalence of false rape accusations is not Professor Twitter’s area of expertise; rather, The Professor brought it up more as a tangential point.  However, what I found interesting, and worthy of a post, is where the argument went next:

Upper-end estimate of unfounded (not false) rape claims is 8%; vast majority of rapes never reported at all http://www.huffingtonpost.com/2014/12/08/false-rape-accusations_n_6290380.html …

The Huffington Post article mentions three US data points on false reporting, all of which I have previously discussed: the Lisak study, the Lonsway article, and FBI data.  While the FBI data is indeed unfounded rather than false, the same cannot be said of Lisak’s and Lonsway’s studies.  Both Lonsway’s 8% and Lisak’s 10% upper ends dealt specifically with false reports.  For whatever reason though, Professor Twitter is trying to imply that the top ranges all deal with unfounded rates3.

Professor Twitter then began to receive criticism that the sources in question, while coming up with seemingly low rates, didn’t appear to support a classification of “infinitesimal.”  The Professor responded with:

More than half of rapes never reported, so even higher % means extremely small statistical likelihood

Oh look, the false rape claim truthers have arrived & they think I care about their math-challenged ideology

There is just one problem with Professor Twitter’s argument – it happens to make no mathematical sense.  Unreported rapes have absolutely no effect on the statistical likelihood of being falsely accused.  It doesn’t matter if there are 10 million unreported rapes a year or 0, the probability of being falsely accused is exactly the same.  Why is this the case?  The probability of being falsely accused depends on the number of false accusations that occur and the size of the potentially affected population.  Whether false accusations make up 8/100 (8%) of reports or 8/1,000,000 (0.0008%), the number of false accusations (8 in this example), and thus the probability of being falsely accused, is unchanged4.
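To make the arithmetic concrete, here is a minimal sketch in Python.  All of the numbers are invented purely for illustration – the point is the structure of the calculation, not the inputs:

```python
# Invented numbers, purely for illustration.
population = 100_000_000   # size of the potentially affected population
false_accusations = 8_000  # false accusations occurring in a year

# The probability of being falsely accused depends only on the two
# numbers above.
p_falsely_accused = false_accusations / population

# Unreported rapes never enter the calculation: vary them freely and
# the probability does not move.
for unreported_rapes in (0, 10_000_000):
    print(f"unreported rapes: {unreported_rapes:>10,} -> "
          f"P(falsely accused) = {p_falsely_accused:.4%}")
```

Both lines of output show the same probability, which is the whole point: the share of rapes that go unreported changes the reporting rate, not the count of false accusations.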

Read more

  1. I’ll be honest, the only reason I can get a timely post out is because the bulk of this was already written.  I’ve already posted about false rape allegations quite a bit and I planned to save this one for down the road, but then Twitter came along and wrecked that plan, so here you go
  2. https://xkcd.com/386/
  3. That doesn’t seem very professor-like
  4. There is a certain delicious irony to ridiculing your critic’s supposed “math-challenged ideology” while simultaneously demonstrating a complete lack of understanding of basic probabilities

What do we know about false rape allegations?

Recently1 Scott Greenfield over at Simple Justice sent me a tweet asking my thoughts on Dara Lind’s article on false rape allegations.  After taking a quick read-through, I was fairly dismissive of it.  Much to her credit, upon seeing the Twitter exchange between myself and Scott, Dara immediately corrected the error in her piece.  She also made herself available for any questions I had while writing this post, so you will see some of her responses included where appropriate2.  While I disagree with some of the conclusions she came to in her article, I was really impressed with the way she conducted herself and she has certainly earned my respect.

So what is the main conclusion I disagree with?

For one thing, research has finally nailed down a consistent range for how many reports of rape are false: somewhere between 2 and 8 percent, which is a lot narrower than the 1.5 percent to 90 percent range of the past.

My first problem with this is that there aren’t any US studies that I am aware of that actually use the 2-8% range.  The only place I’ve seen that range used is in The Voice article which, as I’ve previously discussed, isn’t exactly peer-reviewed research.  Even Lisak, who is a listed author of The Voice article, says the range is wider at 2-10%.  I asked Lind about this and here is her response:

Q2: In the article you state “For one thing, research has finally nailed down a consistent range for how many reports of rape are false: somewhere between 2 and 8 percent” and have a section heading of “A growing consensus: between 2 and 8 percent of allegations.” In your research did you find other authors coming up with that range besides Lonsway herself or when referencing the 2009 The Voice article?

A: To answer questions 2 and 53: I almost certainly relied too much on Lonsway, between the interview I conducted with her and her response to Lisak.

I also asked about how heavily she weighed the relative importance of the various studies she researched:

Q4: When arriving at your conclusions, how heavily did you weigh recency and location (or perhaps the better way to phrase – how much credence did you give to studies done outside the US or more than 20 years ago)?

A: To answer question 4: Strong bias for recency, little if any bias for US-based (I was working off Rumney, after all) but some. I did try to make the paucity of recent US-based studies clear in the article.

Here is another area where I disagree with her – I don’t see why there would be some sort of universal rate of false reporting worldwide, so the international studies aren’t particularly meaningful to me4.  Once you strip out the international studies and the studies over 30 years old, all we are really left with is the MAD study and the Lisak study.  In previous posts I’ve detailed many of the problems with the MAD study, but here are some of the highlights:

  • Study was conducted by End Violence Against Women International, an organization that can hardly claim to be unbiased in regard to the prevalence of false rape reports
  • Prior to the study, Joanne Archambault, the executive director of End Violence Against Women International, expressed her opinion that the real rate of false reporting was 4%5
  • The communities studied were not a random sample, but rather had to apply for the study and were then chosen by a selection committee
  • Despite the data collection period being from 2005-2006, the study results have yet to be published in any peer-reviewed journal6
  • Reports could only be classified as false after a “thorough, evidence-based investigation.”  However, such an investigation isn’t really possible if you follow EVAW International’s training materials, which discourage asking too many questions for fear of receiving inconsistent responses7 and suggest stopping the investigation if things seem off8

Read more

  1. Ok, so not particularly recently, but getting a post up in under a month is about as quick as things get over here
  2. To start with, she mentioned the following: “I want to be upfront that these are just my views and not the views of my site, editors, etc. I own my errors.”
  3. Question 5 was “When evaluating the studies, did you have any concerns about bias on the part of the authors?”
  4. Though if you disagree, those international studies are going to fall victim to the same pitfalls as I discuss in the next section
  5. Surprisingly, when the boss tells you what they think the answer to a question is, people tend to arrive at similar conclusions
  6. Lisak’s citation for the study actually refers to it being from an “Unpublished manuscript”
  7. “The purpose of any follow-up interviews should therefore be to gather additional information and clarify any questions, not to go over the same information again.”
  8. “Given the size of the caseload that most investigators and prosecutors handle, it seems difficult to justify the inordinate time that would be involved in investigating and prosecuting someone for filing a false report—given that it is typically only a misdemeanor offense.” and “While it is understandable that investigators might want to prove that the report is false out of a sense of frustration and a determination to get to the truth, this is probably not the best use of limited resources.”

Narrative in search of a statistic

I originally intended to cover a few more topics before coming back to sexual assault statistics, but with the release of the Columbia Journalism Review report and the Nungesser lawsuit (as well as mattress graduation) since my last post, I had a few more post topics.  The first of these deals with the problem of trying to find statistics that fit a particular narrative.

By and large, I thought the CJR report was quite detailed and it seems like they certainly did their homework.  There is, however, one ironic little tidbit in it.  For a piece in which they rather thoroughly document Rolling Stone’s seeming inability to fact check, the authors themselves screw up their fact checking.  Here is the relevant section:

Erdely and her editors had hoped their investigation would sound an alarm about campus sexual assault and would challenge Virginia and other universities to do better. Instead, the magazine’s failure may have spread the idea that many women invent rape allegations. (Social scientists analyzing crime records report that the rate of false rape allegations is 2 to 8 percent.) At the University of Virginia, “It’s going to be more difficult now to engage some people … because they have a preconceived notion that women lie about sexual assault,” said Alex Pinkleton, a UVA student and rape survivor who was one of Erdely’s sources.

The problem is that if you click their citation you will find no such reference to a 2-8% range.  Rather, the linked paper lists a 2-10% range.  As readers will know, the 2-8% range actually comes from here1.  I get the impression that they had a certain narrative that they wanted to tell and went in search of a statistic they could drop in.  They mixed their sources up because, at the end of the day, they weren’t really reading the studies.  Had they done so, they might have realized that both studies represent little more than a floor on the false reporting rate rather than an actual range, since a report only counts as false once an investigation conclusively proves it false, and everything that can’t be definitively resolved drops out of the count.  That didn’t matter though.  They knew what they wanted to write and just needed a statistic to slot in – any old one would do.
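To see why such figures behave as floors, here is a toy calculation in Python.  The numbers are invented for illustration; the structural assumption is simply that only conclusively-proven-false reports get counted as false:

```python
# Invented numbers, purely to illustrate the "floor" logic.
total_reports = 100
confirmed_false = 6   # conclusively proven false
confirmed_true = 35   # conclusively confirmed
unresolved = total_reports - confirmed_false - confirmed_true  # 59

# The published "false reporting rate" counts only the confirmed-false
# cases, so it can only ever be a lower bound.
measured_rate = confirmed_false / total_reports  # 6%

# The unresolved cases could fall either way, so the true rate lies
# somewhere in a much wider interval.
upper_bound = (confirmed_false + unresolved) / total_reports  # 65%

print(f"measured (floor): {measured_rate:.0%}; "
      f"true rate somewhere in [{measured_rate:.0%}, {upper_bound:.0%}]")
```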

Read more

  1. For fun, feel free to ask yourself why Lisak would be an author of a paper that stated a 2-8% range in 2009 and then switch it up a year later to 2-10% despite no new data to justify the increase

The Lost Art of Fact Checking

If you were watching ABC’s Nightline this past weekend, you would have seen this segment on the new campus sexual assault documentary The Hunting Ground.  About 4 minutes into the segment, you would have gotten to the supposed FBI statistic I discussed in How To Lie And Mislead with Rape Statistics: Part 1.  Here is a transcription of the relevant part:

Annie Clark: It’s hard because there is such a stigma attached.  It is the only crime where we blame the victim.

Amy Robach (ABC News): And, there’s a notion out there, that there’s a high level of false reporting when it comes to rape. 

Annie Clark: The false reporting rate on sexual assault is 2-8% according to the FBI, which is the same rate, if not lower, than any other crime.

If you’ve read my linked blog post on the subject, you know that there is simply no FBI statistic that finds a 2-8% false reporting rate for rape.  In 1995, 1996, and 1997 the FBI did publish an 8% false reporting rate, but rather than being “the same rate, if not lower, than any other crime,” here is what they actually found1:

1995

The “unfounded” rate, or percentage of complaints determined through investigation to be false, is higher for forcible rape than for any other Index crime. In 1995, 8 percent of forcible rape complaints were “unfounded,” while the average for all Index crimes was 2 percent.

1996

The “unfounded” rate, or percentage of complaints determined through investigation to be false, is higher for forcible rape than for any other Index crime. Eight percent of forcible rape complaints in 1996 were “unfounded,” while the average for all Index crimes was 2 percent.

1997

A higher percentage of complaints of forcible rape are determined “unfounded,” or found by investigation to be false, than for any other Index crime. While the average of “unfounded” rates for all Crime Index offenses was 2 percent in 1997, 8 percent of forcible rape complaints were “unfounded” for the same timeframe.

While this isn’t quite as bad as if Robach had used this statistic herself, it still represents pretty poor journalism in my eyes.  This was a pre-taped interview, and Robach clearly knew ahead of time that they were going to bring up this stat, as she sets up Clark to deliver her line.  How long would it take a professional fact checker to find out that not only did the FBI statistic in question not exist, but the real statistic made rather the opposite point?  5 minutes?  10?  Also of note is the fact that approximately 15 seconds before the above exchange occurred, Robach herself used a different sexual assault statistic2, so we know that ABC staff was already researching sexual assault statistics.  Particularly in an era where it seems the media feels the need to present even the most clear cut issues as though there are two equal sides, is it too much to ask a major news organization to simply point out when one of their guests makes an argument based on a nonexistent statistic?

I have reached out to ABC News in order to obtain their policy on fact checking statistics used by guests during pre-taped interviews, but have yet to receive a response3.

  1. From Section II of the 1995, 1996, and 1997 Crime in the US reports
  2. The 20% sexual assault reporting rate for students that is found in this report
  3. Both of the email addresses I was able to find were inoperable, so I had to use the contact form available on their website.  Additionally, I reached out to their two listed social media contacts on Twitter, but did not receive a response there either

A Cut Too Deep: A case study in bias

If I were to create Francis Walker’s Rules of Using Statistics, Rule #1 would probably be this:

1. You are not allowed to use a statistic in an argument unless you have read the underlying research

This rule would also have a sub-part:

1a. Do not skip from the “Introduction” section to the “Results” section; you must also read the “Methods” section

In each of the posts I’ve written so far, bias has played a role, and by reading the “Methods” section of research you can get an understanding of what type of bias might be influencing a particular research study.  The other week I read an article on Ars Technica that reminded me of the first time I realized just how critical the “Methods” section really was.  What was said Ars article about? Digital Security? Space Travel? General Technology, perhaps?  Nope – it was about circumcision.

Read more

Recognizing bias

This post will not deal with statistics (I know, 3 posts in and I’m already breaking with my proposed theme), though it does have to do with one of the reasons I think we let bad stats persuade us.  Once we make up our minds about an issue, it seems all too easy to just take anything that conforms to our view of reality as true without any real verification.  If I intend to blog about others drawing dubious conclusions, I figure it is only fair that I point out when I do it as well.

It should surprise no one that this started with social media.  Yesterday I saw a post or tweet linking to a story about Idaho representative Vito Barbieri asking if a woman could swallow a camera for a remote gynecological exam.  Here is an example of one such article1. I fell for it hook, line, and sinker.

Read more

  1. I believe the original article I saw was here; however, it has since been updated to include some more relevant details

How To Lie And Mislead With Rape Statistics: Part 2

In Part 1 we talked about the reasons the authors felt some of the existing research was not credible.  They wrapped up their critique of the Kanin study with the following quote (and if you read Part 1, you can probably guess who they are going to cite):

As a result of these and other serious problems with the “research,” Kanin’s (1994) article can be considered “a provocative opinion piece, but it is not a scientific study of the issue of false reporting of rape. It certainly should never be used to assert a scientific foundation for the frequency of false allegations” (Lisak, 2007, p. 1).

In contrast, when more methodologically rigorous research has been conducted, estimates for the percentage of false reports begin to converge around 2-8%.

Before I get into the primary study that they reference, let me give you an idea of just what kind of games they have in store for us.  Here is what they say about the second study used to back up their 2-8% range:

For example, Clark and Lewis (1977) examined case files for all 116 rapes investigated by the Toronto Metropolitan Police Department in 1970. As a result, they concluded that seven cases (6%) involved false reports made by victims.

Ok, so we are back to using data from a single police department, in a single year, and in this case from about 40 years prior to when the article was written, but at least it is in the range they are claiming.  They then go on to say this:

Read more

How To Lie And Mislead With Rape Statistics: Part 1

As it turns out, only 7.8% of rape reports are true.

I know that may seem hard to believe, but I didn’t just make it up.  Technically, it is completely true.  It is also complete horse shit.  It is so misleading and built upon so many undisclosed caveats that most people would consider it as good as lying if they knew how it was actually derived.  The thing is, that “only 2-8% of rape allegations turn out to be false” figure you may have heard?  Not only is it just as misleading (if not more so), it actually comes from the exact same data set.

So where did the 2-8% number come from?  If you read about it in Zerlina Maxwell’s piece in the Washington Post, you might reasonably come away believing that the range is from the FBI:

In fact (despite various popular myths), the FBI reports that only 2-8 percent of rape allegations turn out to be false, a number that is smaller than the number (10 percent) who lie about car theft.

Maxwell linked to a 2009 paper that hypothesized the 2-8% range, but it has nothing to do with FBI statistics or car thefts.  However, there is an earlier version of the 2009 article, published in 2007 under the same name, that included this bit of data:

For example, the Portland, Oregon police department examined 431 complaints of completed or attempted sexual assaults in 1990, and found that 1.6%1 were determined to be false. This was in comparison with a rate of 2.6% for false reports of stolen vehicles (Oregon Attorney General’s Office, 2006).

This is from a single police department, in a single year, and is not an FBI statistic.  As it so happens, there is a relevant statistic from the FBI, but instead of 1.6%, it is 8%, which would mean it wouldn’t fit her point about stolen vehicles.  On the other hand, there is also a separate statistic2 that says about 10% of stolen vehicle reports are false.  OK, so now we have a source for the 2-8% range, an FBI estimate that is in that range, and another statistic that shows a higher false reporting rate for car thefts.  So while she is confusing her sources a little bit, and that makes what she is saying somewhat misleading, overall her point is accurate, right?

Read more

  1. This stat was from version 2 of the Oregon SART handbook.  When they updated this stat 12 years later in 2002, as described in version 3, they found it had nearly doubled to 3%
  2. Mentioned at the bottom of page 5 here, though the original source is unknown