Should journals post accepted papers for extra peer review, before official publication?
This year, I have discussed several errors or flaws in recent journal articles (e.g., 1, 2, 3, 4). For some new examples, I think that Figure 2 of Cargile 2021 reported estimates for the feminine factor instead of, as labeled, the masculine factor, and that Fenton and Stephens-Dougan 2021 described a "very small" 0.01 odds ratio as "not substantively meaningful":
Finally, the percent Black population in the state was also associated with a statistically significant decline in responsiveness. However, it is worth noting that this decline was not substantively meaningful, given that the odds ratio associated with this variable was very small (.01).
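If I understand the flaw correctly, an odds ratio indicates a negligible effect when it is close to 1, not when it is close to 0: an odds ratio of 0.01 multiplies the odds by 0.01, which is a 99 percent reduction in the odds. Here is a minimal Python sketch of that arithmetic; the 50 percent baseline probability is hypothetical, chosen only for illustration:

```python
# Minimal sketch: interpreting an odds ratio of 0.01.
# The baseline probability is hypothetical, chosen only for illustration.

def apply_odds_ratio(p_baseline: float, odds_ratio: float) -> float:
    """Return the probability implied by scaling the baseline odds by an odds ratio."""
    odds = p_baseline / (1 - p_baseline)   # convert probability to odds
    new_odds = odds * odds_ratio           # an odds ratio multiplies the odds
    return new_odds / (1 + new_odds)       # convert back to a probability

p0 = 0.50                                  # hypothetical 50% baseline responsiveness
p1 = apply_odds_ratio(p0, 0.01)
print(f"baseline: {p0:.2f}, implied: {p1:.3f}")  # baseline: 0.50, implied: 0.010
```

So, by this arithmetic, a 0.01 odds ratio would move a hypothetical 50 percent baseline down to about 1 percent, which seems substantively meaningful.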
I'll discuss more errors or flaws in the notes below, with more blog posts planned.
---
Given that peer review and the editing process can miss errors that readers would catch, it seems like a good idea for journal editors to solicit more feedback before an article is published.
For example, the Journal of Politics has been posting "Just Accepted" manuscripts before the final formatted version is published, which I think lets the journal correct errors that readers catch in the posted manuscripts.
The Journal of Politics recently posted the manuscript for Baum et al. "Sensitive Questions, Spillover Effects, and Asking About Citizenship on the U.S. Census". I think that some of the results reported in the text do not match the corresponding results reported in Table 1. For example, the text (numbered p. 4) indicates that:
Consistent with expectations, we also find this effect was more pronounced for Hispanics, who skipped 4.21 points more of the questions after the Citizenship Treatment was introduced (t-statistic = 3.494, p-value is less than 0.001).
However, from what I can tell, the corresponding Table 1 result indicates a 4.49 difference, with a t-statistic of 3.674.
---
Another potential flaw in the above statement is that, from what I can tell, the reported t-statistic tests whether the estimate among Hispanics differs from zero. But the "more pronounced for Hispanics" claim should instead be supported by a test of whether the estimate among Hispanics differs from the estimate among non-Hispanics, or whatever comparison category "more pronounced" refers to.
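To illustrate the distinction, below is a minimal sketch in Python with simulated data; the variable names (pct_skipped, treated, hispanic) are hypothetical and not from the Baum et al. materials. The first test asks whether the treatment effect among Hispanics differs from zero; the interaction test asks whether that effect differs from the effect among non-Hispanics:

```python
# Minimal sketch of a subgroup-difference test, using simulated data.
# Variable names and data are hypothetical, not from Baum et al.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),    # treatment indicator
    "hispanic": rng.integers(0, 2, n),   # Hispanic indicator
})
# Simulate an outcome with a larger treatment effect for Hispanics.
df["pct_skipped"] = (
    2 * df["treated"] + 3 * df["treated"] * df["hispanic"] + rng.normal(0, 10, n)
)

# Testing the Hispanic estimate against zero answers a different question:
within = smf.ols("pct_skipped ~ treated", data=df[df["hispanic"] == 1]).fit()
print(within.tvalues["treated"])         # effect among Hispanics vs. zero

# A "more pronounced for Hispanics" claim should rest on the interaction term,
# which tests the Hispanic effect against the non-Hispanic effect:
interaction = smf.ols("pct_skipped ~ treated * hispanic", data=df).fit()
print(interaction.tvalues["treated:hispanic"])
```

A subgroup estimate can differ from zero even when the subgroup effects do not differ from each other, which is why the first t-statistic cannot by itself support the "more pronounced" comparison.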
---
So, to the extent that the issues above are errors or flaws, perhaps they can be addressed before the Journal of Politics publishes the final formatted version of the Baum et al. manuscript.
---
NOTES
1. I think that this is an error, from Lucas and Silber Mohamed 2021, with emphasis added:
Moreover, while racial sympathy may lead to some respondents viewing non-white candidates more favorably, Chudy finds no relationship between racial sympathy and gender sympathy, nor between racial sympathy and attitudes about gendered policies.
That seemed a bit unlikely to me when I read it, and, sure enough, Chudy 2020 footnote 20 indicates that:
The raw correlation of the gender sympathy index and racial sympathy index was .3 for the entire sample (n = 1,000) and .28 for whites alone (n = 751).
2. Some [sic]-type errors in Jardina and Stephens-Dougan 2021. Footnote 25:
The Stereotype items were note included on the 2020 ANES Time Series study.
...and the Section 4 heading:
Are Muslim's part of a "band of others?"
... and the Table 2 note:
2016 ANES Time Serie Study
Moreover, the note for Jardina and Stephens-Dougan 2021 Figure 1 describes the data source as: "ANES Cumulative File (face-to-face respondents only) & 2012 ANES Times Series (all modes)". But, based on the text and the other figure notes, I think that this might refer to 2020 instead of 2012.
These things happen, but I think they're worth noting, at least as evidence against the idea that peer reviews shouldn't flag grammar-type errors.
3. I discussed conditional-acceptance comments in my PS symposium entry "Left Unchecked".