"Contextual Illusions Reveal the Limit of Unconscious Visual Processing"

Comments (9):

Statcheck:

( September 15th, 2016 5:43am UTC )

Using the R package statcheck (v1.0.1), the HTML version of this article was scanned on 2016-08-05 for statistical results (t, r, F, Chi2, and Z values) reported in APA format (for specifics, see Nuijten et al., 2015). An automatically generated report follows.

The scan detected 8 statistical results in APA format, of which 0 contained potentially incorrect statistical results, of which 0 may change statistical significance (alpha = .05). Potential one-tailed results were taken into account when 'one-sided', 'one-tailed', or 'directional' occurred in the text.

Note that these results are not definitive; manual inspection is required to assess whether any flagged results are truly erroneous.

Reference

Nuijten, M. B., Hartgerink, C. H. J., van Assen, M. A. L. M., Epskamp, S., & Wicherts, J. M. (2015). The prevalence of statistical reporting errors in psychology (1985-2013). Behavior Research Methods. http://dx.doi.org/10.3758/s13428-015-0664-2
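The detection step statcheck performs — finding statistics reported in APA format in the article text — can be illustrated with a rough sketch. This is not the package's actual implementation (statcheck is an R package and also handles r, F, Chi2 and Z values plus many formatting variants); `APA_T` and `extract_t_results` are illustrative names only:

```python
import re

# Rough sketch of the kind of APA pattern-matching statcheck performs.
# The real package (Nuijten et al., 2015) handles far more variants;
# APA_T and extract_t_results are illustrative names, not statcheck's API.
APA_T = re.compile(
    r"t\((?P<df>\d+)\)\s*=\s*(?P<stat>[-\u2013]?\d*\.?\d+)"
    r",\s*p\s*[=<>]\s*(?P<p>\d*\.?\d+)"
)

def extract_t_results(text):
    """Return (df, t, p) tuples for every APA-style t-test found in text."""
    results = []
    for m in APA_T.finditer(text):
        stat = m.group("stat").replace("\u2013", "-")  # normalise en-dash minus
        results.append((int(m.group("df")), float(stat), float(m.group("p"))))
    return results
```

Once the triples are extracted, the recalculation step is just recomputing p from the test statistic and degrees of freedom and comparing it with the reported p.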
D. S. Schwarzkopf:

( September 15th, 2016 8:02am UTC )

Interesting that StatCheck does not get the same result. The second test is obviously different in my calculation because of a rounding error and StatCheck takes this into account. But the first test doesn't seem to be a rounding error from what I can tell? I believe it is more likely I made a copy and paste error in that case or that it happened during copy-editing.

Michèle B. Nuijten:

( September 15th, 2016 8:26am UTC )

statcheck takes correct rounding of the test statistic into account :)

so in your case, your t value could have been -1.375 rounded as -1.38, which actually does correspond to a p-value of .218.
D. S. Schwarzkopf:

( September 15th, 2016 8:39am UTC )

Yes, as I said the second test is a rounding error. But what about the first? I can't see how this could be a rounding error.

Michèle B. Nuijten:

( September 15th, 2016 8:58am UTC )

Neither of them are errors. You correctly rounded the obtained test statistics. In the first case, the actual test statistic was probably .495... with the accompanying p-value of .638, but you reported the test statistic as ".50" which is correctly rounded, but at a first glance doesn't seem to match the reported p-value of .638.

Your manual check therefore counted this as inconsistent, but statcheck takes this type of correct rounding into account, and didn't flag it as inconsistent.
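The rounding logic described here can be checked numerically: a reported t of 0.50 stands for any value in [0.495, 0.505), and the p-values at the edges of that interval bracket the reported p of .638. A minimal stdlib-only sketch (`t_pdf` and `p_two_tailed` are illustrative names, not statcheck's functions):

```python
from math import gamma, pi, sqrt

# Stdlib-only sketch of the consistency check described above; t_pdf and
# p_two_tailed are illustrative names, not statcheck's actual functions.
def t_pdf(x, df):
    """Density of Student's t distribution with df degrees of freedom."""
    coef = gamma((df + 1) / 2) / (sqrt(df * pi) * gamma(df / 2))
    return coef * (1 + x * x / df) ** (-(df + 1) / 2)

def p_two_tailed(t, df, steps=10_000):
    """Two-tailed p-value via Simpson's rule on [0, |t|] (steps must be even)."""
    b = abs(t)
    h = b / steps
    total = t_pdf(0.0, df) + t_pdf(b, df)
    for i in range(1, steps):
        total += (4 if i % 2 else 2) * t_pdf(i * h, df)
    return 1 - 2 * (total * h / 3)

# A reported "t(6) = 0.50, p = .638" is consistent under correct rounding:
# p_two_tailed(0.50, 6)  ≈ .635
# p_two_tailed(0.495, 6) ≈ .638
```

Simpson's rule is used only to keep the sketch dependency-free; with SciPy available, `2 * scipy.stats.t.sf(abs(t), df)` gives the two-tailed p directly.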
D. S. Schwarzkopf:

( September 15th, 2016 9:07am UTC )

Ah of course you're right! Definitely not enough coffee yet it seems... I could have sworn I tried t=0.495 but apparently I missed it. Anyway, great no errors then! And perhaps StatCheck is smarter than one human after all ;)

The scan detected 8 statistical results in APA format, of which 2 contained potentially incorrect statistical results, of which 0 may change statistical significance (alpha = .05). Potential one-tailed results were taken into account when 'one-sided', 'one-tailed', or 'directional' occurred in the text.

The errors that may affect the computed p-value (but not the statistical significance) were reported as:

t(6) = 0.50, p = .638 (recalculated p-value: 0.63488)

t(6) = –1.38, p = .218 (recalculated p-value: 0.2168)

Note that these are definitive results but they could use automatic inspection to definitively assess whether I missed anything.

1. To find out if PubPeer sends emails to corresponding authors when there is a comment. So far it hasn't in this case. Is this because it somehow figured out I am an author? Is this done by a person and verified?

2. Because I had discovered these errors and therefore I felt I should correct them. Interestingly, StatCheck hadn't autochecked them. Are these comments rolled out sequentially? (It also has the added benefit of allowing me to check the results if they do eventually come in...)

Anyway, this is not really an appropriate place to discuss the value of StatCheck auto-commenting on PubPeer, and I can see both sides of that debate. But if StatCheck can clutter up PubPeer with 50K comments of unknown sensitivity and specificity, then I am fully justified in posting a minor correction I know to be valid.
