Professor Twitter and the Problem of the Low False Rape Narrative

Yesterday[1] I saw an interesting argument taking place in my Twitter feed and couldn’t let it go without comment[2].  For the purposes of this post, we are going to refer to the Twitter author in question as Professor Twitter.  So how did this all start?  Professor Twitter posted the following:

MRA @reddit thread encourages men to record sex to avoid false rape allegations

1/2 Sigh: 1. Recording sex w/o consent is a sex crime; 2. recording sex doesn’t prove it’s not rape

At this point I agree with Professor Twitter’s sentiment – after all, this is a plainly ridiculous suggestion.  Then, The Professor follows up:

2/2  3. Statistical likelihood of being falsely accused of rape is infinitesimal

This, of course, is where the arguments started.  Scott tweeted suggesting that perhaps a claim such as that should come with some sort of citation (which is how I first saw what was going on), and soon others followed, asking for The Professor’s source.  Now, the prevalence of false rape accusations is not Professor Twitter’s area of expertise; rather, The Professor brought it up as a tangential point.  What I found interesting, however, and worthy of a post, is where the argument went next:

Upper-end estimate of unfounded (not false) rape claims is 8%; vast majority of rapes never reported at all http://www.huffingtonpost.com/2014/12/08/false-rape-accusations_n_6290380.html …

The Huffington Post article mentions three US data points on false reporting, all of which I have previously discussed: the Lisak study, the Lonsway article, and FBI data.  While the FBI data does indeed measure unfounded rather than false reports, the same cannot be said of Lisak’s and Lonsway’s studies.  Both Lonsway’s 8% and Lisak’s 10% upper ends dealt specifically with false reporting.  For whatever reason, though, Professor Twitter is trying to imply that the top ranges all deal with unfounded rates[3].

Professor Twitter then began to receive criticism that the sources in question, while coming up with seemingly low rates, didn’t appear to support a classification of “infinitesimal.”  The Professor responded with:

More than half of rapes never reported, so even higher % means extremely small statistical likelihood

Oh look, the false rape claim truthers have arrived & they think I care about their math-challenged ideology

There is just one problem with Professor Twitter’s argument – it happens to make no mathematical sense.  Unreported rapes have absolutely no effect on the statistical likelihood of being falsely accused.  It doesn’t matter if there are 10 million unreported rapes a year or 0, the probability of being falsely accused is exactly the same.  Why is this the case?  The probability of being falsely accused depends on the number of false accusations that occur and the size of the potentially affected population.  Whether false accusations make up 8 out of 100 reports (8%) or 8 out of 1,000,000 reports (0.0008%), the number of false accusations (8 in this example), and thus the probability of being falsely accused, is unchanged[4].
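To make the point concrete, here is a minimal sketch in Python with entirely made-up numbers (the population size, the count of false accusations, and the unreported figures are all hypothetical, chosen only for illustration):

```python
# Minimal sketch with made-up numbers; none of these figures are real.
def p_falsely_accused(false_accusations: int, population_at_risk: int) -> float:
    """Chance that a random member of the at-risk population is falsely accused."""
    return false_accusations / population_at_risk

POPULATION = 100_000_000   # hypothetical at-risk population
FALSE_ACCUSATIONS = 8      # hypothetical annual count of false accusations

# Vary the number of unreported rapes wildly; it never appears in the
# formula, so the probability is identical in every scenario.
for unreported in (0, 1_000_000, 10_000_000):
    p = p_falsely_accused(FALSE_ACCUSATIONS, POPULATION)
    print(f"unreported rapes: {unreported:>10,}  P(falsely accused) = {p:.0e}")
```

The reporting rate can change the percentage of reports that are false, but the numerator that drives anyone’s actual risk – the count of false accusations – stays put.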


  1. I’ll be honest, the only reason I can get a timely post out is because the bulk of this was already written.  I’ve already posted about false rape allegations quite a bit and I planned to save this one for down the road, but then Twitter came along and wrecked that plan, so here you go
  2. https://xkcd.com/386/
  3. That doesn’t seem very professor-like
  4. There is a certain delicious irony to ridiculing your critic’s supposed “math-challenged ideology” while simultaneously demonstrating a complete lack of understanding of basic probabilities

What do we know about false rape allegations?

Recently[1] Scott Greenfield over at Simple Justice sent me a tweet asking for my thoughts on Dara Lind’s article on false rape allegations.  After taking a quick read through, I was fairly dismissive of it.  Much to her credit, upon seeing the Twitter exchange between myself and Scott, Dara immediately corrected the error in her piece.  She also made herself available for any questions I had while writing this post, so you will see some of her responses included where appropriate[2].  While I disagree with some of the conclusions she came to in her article, I was really impressed with the way she conducted herself, and she has certainly earned my respect.

So what is the main conclusion I disagree with?

For one thing, research has finally nailed down a consistent range for how many reports of rape are false: somewhere between 2 and 8 percent, which is a lot narrower than the 1.5 percent to 90 percent range of the past.

My first problem with this is that there aren’t any US studies that I am aware of that actually use the 2-8% range.  The only place I’ve seen that range used is The Voice article which, as I’ve previously discussed, isn’t exactly peer-reviewed research.  Even Lisak, who is a listed author of The Voice article, says the range is wider, at 2-10%.  I asked Lind about this and here is her response:

Q2: In the article you state “For one thing, research has finally nailed down a consistent range for how many reports of rape are false: somewhere between 2 and 8 percent” and have a section heading of “A growing consensus: between 2 and 8 percent of allegations.” In your research did you find other authors coming up with that range besides Lonsway herself or when referencing the 2009 The Voice article?

A: To answer questions 2 and 5[3]: I almost certainly relied too much on Lonsway, between the interview I conducted with her and her response to Lisak.

I also asked about how heavily she weighed the relative importance of the various studies she researched:

Q4: When arriving at your conclusions, how heavily did you weigh recency and location (or perhaps the better way to phrase – how much credence did you give to studies done outside the US or more than 20 years ago)?

A: To answer question 4: Strong bias for recency, little if any bias for US-based (I was working off Rumney, after all) but some. I did try to make the paucity of recent US-based studies clear in the article.

Here is another area where I disagree with her – I don’t see why there would be some sort of universal rate of false reporting worldwide, so the international studies aren’t particularly meaningful to me[4].  Once you strip out the international studies and the studies over 30 years old, all we are really left with is the MAD study and the Lisak study.  In previous posts I’ve detailed many of the problems with the MAD study, but here are some of the highlights:

  • Study was conducted by End Violence Against Women International, an organization that can hardly claim to be unbiased in regard to the prevalence of false rape reports
  • Prior to the study Joanne Archambault, the executive director of End Violence Against Women International, expressed her opinion that the real rate of false reporting was 4%[5]
  • The communities studied were not a random sample, but rather had to apply for the study and were then chosen by a selection committee
  • Despite the data collection period being from 2005-2006, the study results have yet to be published in any peer-reviewed journal[6]
  • Reports could only be classified as false after a “thorough, evidence-based investigation.”  However, such an investigation isn’t really possible if you follow EVAW International’s training materials, which discourage asking too many questions for fear of receiving inconsistent responses[7] and suggest stopping the investigation if things seem off[8]


  1. Ok, so not particularly recently, but getting a post up in under a month is about as quick as things get over here
  2. To start with, she mentioned the following: “I want to be upfront that these are just my views and not the views of my site, editors, etc. I own my errors.”
  3. Question 5 was “When evaluating the studies, did you have any concerns about bias on the part of the authors?”
  4. Though if you disagree, those international studies are going to fall victim to the same pitfalls I discuss in the next section
  5. Surprisingly, when the boss tells you what they think the answer to a question is, people tend to arrive at similar conclusions
  6. Lisak’s citation for the study actually refers to it being from an “Unpublished manuscript”
  7. “The purpose of any follow-up interviews should therefore be to gather additional information and clarify any questions, not to go over the same information again.”
  8. “Given the size of the caseload that most investigators and prosecutors handle, it seems difficult to justify the inordinate time that would be involved in investigating and prosecuting someone for filing a false report—given that it is typically only a misdemeanor offense.” and “While it is understandable that investigators might want to prove that the report is false out of a sense of frustration and a determination to get to the truth, this is probably not the best use of limited resources.”

Narrative in search of a statistic

I originally intended to cover a few more topics before coming back to sexual assault statistics, but with the release of the Columbia Journalism Review report and the Nungesser lawsuit (as well as the mattress graduation) since my last post, I had a few more post topics.  The first deals with the problem of trying to find statistics that fit a particular narrative.

By and large, I thought the CJR report was quite detailed and it seems like they certainly did their homework.  There is, however, one ironic little tidbit in it.  For a piece in which they rather thoroughly document Rolling Stone’s seeming inability to fact check, the authors themselves screw up their fact checking.  Here is the relevant section:

Erdely and her editors had hoped their investigation would sound an alarm about campus sexual assault and would challenge Virginia and other universities to do better. Instead, the magazine’s failure may have spread the idea that many women invent rape allegations. (Social scientists analyzing crime records report that the rate of false rape allegations is 2 to 8 percent.) At the University of Virginia, “It’s going to be more difficult now to engage some people … because they have a preconceived notion that women lie about sexual assault,” said Alex Pinkleton, a UVA student and rape survivor who was one of Erdely’s sources.

The problem being that if you click their citation you will find no such reference to a 2-8% range.  Rather, the linked paper lists a 2-10% range.  As readers will know, the 2-8% range actually comes from here[1].  I get the impression that they had a certain narrative they wanted to tell and went in search of a statistic they could drop in.  They mixed their sources up because, at the end of the day, they weren’t really reading the studies.  Had they done so, they might have realized that both studies represent little more than a floor on the false reporting rate rather than an actual range.  That didn’t matter though.  They knew what they wanted to write and just needed a statistic to slot in – any old one would do.
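To see why these figures are floors rather than a range, consider a toy breakdown in the same spirit (the case counts below are invented for illustration, not taken from either study):

```python
# Toy numbers, invented for illustration -- not data from Lisak or the MAD study.
cases = {
    "proven_false": 8,    # affirmatively shown to be false
    "proven_true": 10,    # corroborated or resulted in conviction
    "unresolved": 82,     # dropped, uncorroborated, or otherwise unknown
}
total = sum(cases.values())

floor = cases["proven_false"] / total                            # 8%
ceiling = (cases["proven_false"] + cases["unresolved"]) / total  # 90%
print(f"false reporting rate: at least {floor:.0%}, at most {ceiling:.0%}")
```

The headline number reports only the floor; where the real rate sits inside that band depends entirely on how the large pool of unresolved cases actually breaks.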


  1. For fun, feel free to ask yourself why Lisak would be an author of a paper that stated a 2-8% range in 2009 and then switch it up a year later to 2-10% despite no new data to justify the increase

How To Lie And Mislead With Rape Statistics: Part 2

In Part 1 we talked about the reasons the authors felt some of the existing research was not credible.  They wrapped up their critique of the Kanin study with the following quote (and if you read Part 1, you can probably guess who they are going to cite):

As a result of these and other serious problems with the “research,” Kanin’s (1994) article can be considered “a provocative opinion piece, but it is not a scientific study of the issue of false reporting of rape. It certainly should never be used to assert a scientific foundation for the frequency of false allegations” (Lisak, 2007, p. 1).

In contrast, when more methodologically rigorous research has been conducted, estimates for the percentage of false reports begin to converge around 2-8%.

Before I get into the primary study that they reference, let me give you an idea of just what kind of games they have in store for us.  Here is what they say about the second study used to back up their 2-8% range:

For example, Clark and Lewis (1977) examined case files for all 116 rapes investigated by the Toronto Metropolitan Police Department in 1970. As a result, they concluded that seven cases (6%) involved false reports made by victims.

Ok, so we are back to using data from a single police department, in a single year, and in this case from about 40 years prior to when the article was written, but at least it is in the range they are claiming.  They then go on to say this:


How To Lie And Mislead With Rape Statistics: Part 1

As it turns out, only 7.8% of rape reports are true.

I know that may seem hard to believe, but I didn’t just make it up.  Technically, it is completely true.  It is also complete horse shit.  It is so misleading, and built upon so many undisclosed caveats, that most people would consider it as good as lying if they knew how it was actually derived.  The thing is, that “only 2-8% of rape allegations turn out to be false” figure you may have heard?  Not only is it just as misleading (if not more so), it actually comes from the exact same data set.
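Here is a hedged illustration of the trick (the numbers below are invented; they are not the actual data set behind either figure): when most cases in a data set are never definitively resolved, you can manufacture both an “only X% are true” headline and an “only Y% are false” headline from the very same case files.

```python
# Invented numbers for illustration -- not the actual data behind either figure.
reports = 1000
confirmed_true = 78    # e.g. corroborated or resulted in conviction
confirmed_false = 50   # affirmatively shown to be false
# The remaining 872 cases were never definitively resolved either way.

print(f"Only {confirmed_true / reports:.1%} of rape reports are true!")    # 7.8%
print(f"Only {confirmed_false / reports:.1%} of rape reports are false!")  # 5.0%
# Both headlines are technically true, and both are meaningless: each one
# quietly lumps every unresolved case in with the opposing side.
```

Each headline is technically accurate, and each hides the same undisclosed caveat: what happened to the unresolved majority.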

So where did the 2-8% number come from?  If you read about it in Zerlina Maxwell’s piece in the Washington Post, you might reasonably (but wrongly) come away believing the range comes from the FBI:

In fact (despite various popular myths), the FBI reports that only 2-8 percent of rape allegations turn out to be false, a number that is smaller than the number (10 percent) who lie about car theft.

Maxwell linked to a 2009 paper that hypothesized the 2-8% range but has nothing to do with FBI statistics or car thefts.  However, there is an earlier version of that article, published in 2007 under the same name, which included this bit of data:

For example, the Portland, Oregon police department examined 431 complaints of completed or attempted sexual assaults in 1990, and found that 1.6%[1] were determined to be false. This was in comparison with a rate of 2.6% for false reports of stolen vehicles (Oregon Attorney General’s Office, 2006).

This is from a single police department, in a single year, and is not an FBI statistic.  As it so happens, there is a relevant statistic from the FBI, but instead of 1.6% it is 8%, which means it wouldn’t fit her point about stolen vehicles.  On the other hand, there is also a separate statistic[2] that says about 10% of stolen vehicle reports are false.  OK, so now we have a source for the 2-8% range, an FBI estimate that is in that range, and another statistic that shows a higher percentage of false car theft reports.  So while she is confusing her sources a little bit, and that makes what she is saying somewhat misleading, overall her point is accurate, right?


  1. This stat was from version 2 of the Oregon SART handbook.  When it was updated 12 years later in 2002, as described in version 3, the rate had nearly doubled to 3%
  2. Mentioned at the bottom of page 5 here, though the original source is unknown