Investigation and analysis by me (Jim Hopper, PhD) and another researcher hired as an independent consultant (Allison Tracy, PhD) have revealed that this paper has major problems that call into question its scientific validity:

1. Unreported problems that invalidate the latent class trajectory analyses, e.g., their model is under-identified and under-powered, and has unacceptably low rates of classification certainty (as low as 51% for the "derivation" dataset and 73% for the "validation" dataset; see the first sketch after these lists).

2. A false statement about their analyses. The paper states that the (minimal) senior year data were not used in the latent trajectory analysis, when in fact they were, as can be seen from the authors' own Mplus code and by comparing its results with those reported in the paper. This inclusion was not an oversight: the output from that model matches the fit statistics given in the article (though not perfectly), and the plots produced by the syntax match the graphs in the article's figure. Nor is the impact of including the additional data trivial: omitting the senior year data produced statistically significant evidence that the method used to handle missing data was inappropriate and may have biased the results. In addition, when the senior year data actually were omitted, not only was the model (again) under-identified and under-powered, but the resulting latent class trajectories no longer included a "decreasing" group.

3. Problems with the integrity and validity of their data, including (a) incorrect values resulting from recoding values for "missing" and "no response" into "never [raped]" (see the second sketch after these lists), (b) about 110 erroneous missing values for the sophomore year rape variable and about 70 erroneous missing values for the junior year rape variable, and (c) other mismatches between the data produced by Swartout's own syntax and the analysis dataset he used and provided. These problems are much more extensive, and much more significant, than what is mentioned in the correction published in December 2015 (see http://archpedi.jamanetwork.com/article.aspx?articleid=2449663).

Also, the authors operationalized "serial rape" in questionable ways that (even if their model and data were valid) call into question their conclusions, including:

1. Exclusion of data on attempted rape. The difference between a completed and an attempted rape is often just a matter of luck; attempted rape is still a crime, and experiences of attempted rape can be quite traumatic to victims.

2. Exclusion of data on the frequency of rapes (and attempted rapes) self-reported by student participants at each time period. For example, does it really make sense to define someone who reports having committed more than 2 rapes, or even more than 5 rapes, within an 8- or 12-month period as not a serial rapist? (See the third sketch after these lists.)

3. The assumption that men who reported having committed more than one rape during one assessment period were equally honest about whether or not they had committed rape(s) on subsequent surveys about the other years. We are aware of no evidence that men who complete the Sexual Experiences Survey every year for several years are equally honest in their responses over subsequent years, and there are good reasons to believe that, especially for men who commit multiple rapes, this is not the case. Yet this assumption is central to the authors' new, restrictive definition of "serial rape" as having reported committing rape in more than one assessment period (or year of college).

4. Use of latent class trajectory analysis to address what they refer to as the "serial rapist assumption," which is actually two simple propositions: the majority of rapists are serial rapists, and the vast majority of rapes are committed by serial rapists.
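To make the classification-certainty problem concrete, here is a minimal sketch (in Python, with invented numbers) of the standard diagnostics involved: the average posterior probability of membership within each assigned class, and the relative entropy. The probability matrix below is hypothetical; it merely stands in for the posterior class probabilities that Mplus can save for a mixture model. An average certainty near 0.5 for an assigned class, like the 51% noted above, means assignments to that class are scarcely better than coin flips.

```python
import numpy as np

# Hypothetical posterior class-membership probabilities for six subjects
# across three latent classes (each row sums to 1). These invented values
# stand in for the posterior probabilities Mplus can save for a mixture model.
post = np.array([
    [0.51, 0.30, 0.19],   # modal class 0, but only 51% certain
    [0.45, 0.40, 0.15],
    [0.10, 0.80, 0.10],
    [0.20, 0.73, 0.07],
    [0.05, 0.15, 0.80],
    [0.34, 0.33, 0.33],   # essentially a three-way coin toss
])

modal = post.argmax(axis=1)                      # hard ("modal") assignment
certainty = post[np.arange(len(post)), modal]    # certainty of each assignment

# Average posterior probability within each assigned class: values near 1.0
# indicate clean separation; values near 0.5 indicate unreliable assignment.
for k in range(post.shape[1]):
    members = certainty[modal == k]
    print(f"class {k}: n={members.size}, mean certainty={members.mean():.2f}")

# Relative entropy, a common one-number summary of classification quality.
n, K = post.shape
entropy = 1 - (-(post * np.log(post)).sum()) / (n * np.log(K))
print(f"relative entropy = {entropy:.2f}")
```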
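Likewise, the recoding problem in point (a) of the data-integrity item is easy to demonstrate. This second sketch uses invented codebook values (98 for "no response," 99 for "missing"; the actual codes are an assumption here) to show how a recode that treats anything other than an affirmative report as "never" silently converts nonresponse into "never raped," and how the affected rows can be flagged.

```python
import pandas as pd
import numpy as np

# Hypothetical raw survey codes; the actual codebook values are assumptions:
# 0 = never, 1 = reported rape, 98 = no response, 99 = missing.
raw = pd.DataFrame({"rape_soph": [0, 1, 98, 99, 0, 98]})

# A recode of the kind described above: anything that is not an
# affirmative report becomes 0 ("never"), including nonresponse.
collapsed = (raw["rape_soph"] == 1).astype(int)

# A recode that preserves missingness instead of inventing "never":
preserved = raw["rape_soph"].replace({98: np.nan, 99: np.nan})

# Rows where the two recodes disagree are cases in which "missing" or
# "no response" was silently turned into "never raped".
flagged = raw[(collapsed == 0) & preserved.isna()]
print(flagged)
```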
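And the consequences of discarding frequency data can be shown in a few lines. This third sketch, using invented per-year counts, contrasts the article's operationalization ("serial" only if rape is reported, yes or no, in more than one assessment period) with a frequency-sensitive alternative; the variable names and the two-act threshold are illustrative assumptions, not the authors' code.

```python
import pandas as pd

# Invented per-year counts of self-reported rapes/attempted rapes for
# four hypothetical men (the article dichotomizes these to yes/no).
df = pd.DataFrame(
    {"year1": [0, 1, 5, 0], "year2": [0, 1, 0, 3], "year3": [0, 0, 0, 0]},
    index=["A", "B", "C", "D"],
)

# The article's operationalization: "serial" only if rape is reported
# (yes/no) in more than one assessment period.
serial_by_paper = (df > 0).sum(axis=1) > 1

# A frequency-sensitive alternative: two or more reported acts overall.
serial_by_count = df.sum(axis=1) >= 2

print(pd.DataFrame({"total_acts": df.sum(axis=1),
                    "paper_def": serial_by_paper,
                    "count_def": serial_by_count}))
# Man "C" reports 5 rapes within a single year, yet is NOT "serial"
# under the article's definition; man "B" (two single-rape years) is.
```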
We have prepared comprehensive documentation to support and flesh out the claims above:

1. An Initial Critical Response to Swartout et al.'s (2015) paper in JAMA Pediatrics, "Trajectory Analysis of the Campus Serial Rapist Assumption," by Jim Hopper, David Lisak, & Allison Tracy (http://www.jimhopper.com/swartout/Swartout_Critique.pdf). This document includes:
Being assigned to a "latent class" of "increasing," "decreasing" or "low/time-limited" rape -- based on potentially flawed data and flawed statistical modeling -- is NOT the same as ACTUALLY BEING a man who had a pattern of increasing, decreasing or low/time-limited rape over time. (This is true even when "rape" is reduced to a yes-or-no variable that obscures serial rape.)
If Swartout et al. provided the "derivation dataset" with the R (rape) variables AND the ID numbers matching those in the public dataset, everyone could see just how many rapes and attempted rapes were committed, and when, by each of the men whom their complex (and invalid) analysis assigned to the categories of "increasing," "decreasing" and "low/time-limited" rape. No knowledge of complex statistics is necessary; Swartout and colleagues need only provide the ID numbers of the subjects in the derivation dataset used for their analyses and a variable indicating the "latent class" to which each subject was assigned.
Similarly, with sufficient data from co-author Martie Thompson's "validation dataset," any researcher with rudimentary knowledge of statistics could conduct the same analyses described above (sketched below) and see just how many rapes and attempted rapes were committed, and when, by each of the men assigned to the "increasing," "decreasing" and "low/time-limited" rape categories. Critically, this would NOT involve disclosing information in any way inconsistent with ethical management of such data, just as there are no such problems with the long-available public version of the "derivation dataset," of which co-author Jacquelyn White is the principal investigator.
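The tabulation described in the two paragraphs above is trivial once the ID numbers and class assignments are in hand. Here is a minimal sketch, assuming hypothetical file and variable names (a public_dataset.csv with per-year counts and a class_assignments.csv with each subject's assigned latent class); none of these names come from the authors' materials.

```python
import pandas as pd

# Hypothetical inputs (file and variable names are assumptions):
# public_dataset.csv:     id, rapes_y1..rapes_y4, attempts_y1..attempts_y4
# class_assignments.csv:  id, latent_class (increasing/decreasing/low)
counts = pd.read_csv("public_dataset.csv")
classes = pd.read_csv("class_assignments.csv")

merged = counts.merge(classes, on="id", how="inner")

# How many rapes and attempted rapes were reported, and when, by the men
# assigned to each latent class:
year_cols = [c for c in merged.columns
             if c.startswith(("rapes_", "attempts_"))]
print(merged.groupby("latent_class")[year_cols].sum())
```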
"All of this is a troubling development for science, a field that wants the public to believe that transparency is one of its guiding principles. We'd like to believe that, too, but when researchers refuse to share data, and how they came up with it, they lose the right to call what they do science. The ability of other researchers -- including competitors -- to try to poke holes in an analysis is a bedrock of the scientific method" (bold added; http://www.statnews.com/2015/12/23/sharing-data-science/).