I originally intended to cover a few more topics before coming back to sexual assault statistics, but with the release of the Columbia Journalism Review report and the Nungesser lawsuit (as well as mattress graduation) since my last post, I had a few more post topics. The first deals with the problem of trying to find statistics that fit a particular narrative.
By and large, I thought the CJR report was quite detailed and it seems like they certainly did their homework. There is, however, one ironic little tidbit in it. For a piece in which they rather thoroughly document Rolling Stone’s seeming inability to fact check, the authors themselves screw up their fact checking. Here is the relevant section:
Erdely and her editors had hoped their investigation would sound an alarm about campus sexual assault and would challenge Virginia and other universities to do better. Instead, the magazine’s failure may have spread the idea that many women invent rape allegations. (Social scientists analyzing crime records report that the rate of false rape allegations is 2 to 8 percent.) At the University of Virginia, “It’s going to be more difficult now to engage some people … because they have a preconceived notion that women lie about sexual assault,” said Alex Pinkleton, a UVA student and rape survivor who was one of Erdely’s sources.
The problem being that if you click their citation you will find no such reference to a 2-8% range. Rather, the linked paper lists a 2-10% range. As readers will know, the 2-8% range actually comes from here1. I get the impression that they had a certain narrative that they wanted to tell and went in search of a statistic they could drop in. They mixed their sources up because, at the end of the day, they weren’t really reading the studies. Had they done so, they might have realized that both studies represent little more than a floor on the false reporting rate rather than an actual range. That didn’t matter though. They knew what they wanted to write and just needed a statistic to slot in – any old one would do.
Those asserting the false reporting rate to be low are hardly alone in committing this sin though. In fact, I see it all the time from people trying to claim the Kanin study is the benchmark study on the topic. Don’t get me wrong, Lisak’s critique of the Kanin study is pretty awful, but that doesn’t mean the Kanin study should be taken as at all representative of the country as a whole. Since I’ve already spent quite a number of words looking at the studies on the low end of the spectrum, I figured it was time to turn a critical eye on the other side.
So why might we want to question the conclusions of the Kanin study? Let’s start with the issue of bias. Just as I take issue with the fact that Lonsway and Archambault were anchored to a 2-4% range before they did their study, we have to examine whether Kanin might himself have been anchored to a particular view on the matter. The part of the Kanin study that jumped out at me the most was the following line:
Widely divergent viewpoints are held regarding the incidence of false rape reporting (Katz and Mazur, 1979). For example, reports set the figure from lows of 0.25% (O’Reilly, 1984) and 1% (Krasner et al., 1976) to highs of 80-90% (Bronson, 1918; Comment, 1968) and even 100% (see Kanin, 1985).
And here is the corresponding listing for that study in the references section:
Kanin, E. J. (1985). Unfounded rape. Paper presented at the Academy of Criminal Justice Sciences annual meeting, Las Vegas, NV.
So nine years before this study, Kanin did another study that found a 100% false reporting rate? 100%? Unfortunately, I have been unable to find a copy of that paper in order to take a deeper look, but something doesn’t seem right there. His earlier research results certainly call into question how much we can rely on Kanin to be unbiased on the issue.
Next we have the issue of sampling. While the study deals with nine years’ worth of data, it is only looking at a single police agency (we’ll deal with the addendum in a bit). The demographics of that area are very important if we want to know how representative the findings might be for a larger population. So what do we know about it? 1) It is somewhere in the Midwest, and 2) It is in a small metropolitan area (population of about 70,000). That’s it. Has the police department had issues with sexual assault reports that might have influenced the results? Who knows. Based on the extremely limited information we have, there is no reasonable basis to apply these results to any other situation.
Perhaps the most important issue to address is one of timeliness. The Kanin study tracked rape complaints over the period of 1978-1987, so at this point the data we are looking at describes a time 28-37 years ago. This is particularly significant with regard to the primary reason the study found for the filing of false complaints:
Of the 45 cases of false charges, over one-half (56%, n = 27) served the complainants’ need to provide a plausible explanation for some suddenly foreseen, unfortunate consequence of a consensual encounter, usually sexual, with a male acquaintance.
The study goes on to specify that the reason an alibi is needed typically has to do with pregnancy:
The above cases are prototypical cases where the fear of pregnancy is paramount in motivating the rape charge. This theme is constant, only the scenario changes in that the lover is black, the husband is out of state on a job, the husband had a vasectomy, the condom broke. Only three cases deviated from this tradition.
The fact of the matter is that contraceptive usage today is much different than it was during the time period of the Kanin study, particularly with the contraceptive coverage provisions of the Affordable Care Act. Without the fear of unwanted pregnancy, a huge chunk of the false rape complaints in the study would not have happened. This is one of the biggest reasons to question the validity of Kanin when used today.
The Kanin study does contain an addendum that talks about additional findings from two Midwestern universities, but that encompasses a scant two paragraphs. We have even less information about how this study was conducted than we do about the original, though once again we find that over half of the false rape complaints fulfilled the “alibi” function.
Overall, while the Kanin study can provide an interesting counterpoint to the “only 2-8%” narrative, grounding your view that false complaints are extremely prevalent in a single, extremely dated study is just as foolish as the logic employed by those who swear by the other studies I have covered.
- For fun, feel free to ask yourself why Lisak would be an author of a paper that stated a 2-8% range in 2009 and then switch to 2-10% a year later despite no new data to justify the increase ↩