Uptake and outcome of manuscripts in Nature journals by review model and author characteristics


Double-blind peer review has been proposed as a possible solution to implicit referee bias in academic publishing. The aims of this study are to analyse the demographics of corresponding authors choosing double-blind peer review and to identify differences in the editorial outcome of manuscripts depending on their review model. The data comprise 128,454 manuscripts received between March 2015 and February 2017 by 25 Nature-branded journals. Author uptake of double-blind review was 12%. We found a small but significant association between journal tier and review type. We found no statistically significant difference in the distribution of peer review model between male and female corresponding authors. Corresponding authors from less prestigious institutions are more likely to choose double-blind review. In the ten countries with the highest number of submissions, we found a small but significant association between country and review type. The outcome at both first decision and post review is significantly more negative (i.e. a higher likelihood of rejection) for double-blind than for single-blind papers. Authors choose double-blind review more frequently when they submit to more prestigious journals, are affiliated with less prestigious institutions, or are from specific countries; the double-blind option is also linked to less successful editorial outcomes.


💡 Research Summary

This study investigates the uptake of double‑blind peer review (DBPR) versus single‑blind peer review (SBPR) and the associated editorial outcomes across 25 Nature‑branded journals over a two‑year period (March 2015–February 2017). The dataset comprises 128,454 manuscripts after excluding 5,011 records with missing review‑type information. For the analysis of author uptake, only direct submissions (106,373 manuscripts) were considered, because transferred manuscripts retain the original review‑type choice and therefore cannot inform the decision‑making process.

Author gender was inferred using the Gender API, retaining only assignments with a confidence score of at least 80%. This yielded a “Gender Dataset” of 83,256 records (61,536 male, 15,060 female, 6,660 NA). Institutional affiliation was normalized via the GRID API and linked to the 2016/2017 Times Higher Education (THE) rankings. Institutions were grouped into three prestige categories: 1 (THE rank 1–10), 2 (rank 11–100), and 3 (rank > 100); a fourth “unranked” group was excluded from most analyses. This produced an “Institution Dataset” of 58,920 records.
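The three-band prestige grouping described above can be sketched as a small helper. This is illustrative only: the function name and the use of `None` for unranked institutions are assumptions, not details from the paper.

```python
def prestige_group(the_rank):
    """Map a Times Higher Education rank to the study's prestige band.

    Bands as described in the summary: rank 1-10 -> group 1,
    rank 11-100 -> group 2, rank > 100 -> group 3. Institutions
    absent from the THE ranking ("unranked") were excluded from
    most analyses in the study.
    """
    if the_rank is None:
        return "unranked"  # excluded from most analyses
    if the_rank <= 10:
        return 1
    if the_rank <= 100:
        return 2
    return 3
```

For example, `prestige_group(42)` returns 2, and `prestige_group(None)` flags the record for exclusion.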

Key descriptive findings: overall DBPR uptake was 12% (12,631 manuscripts). Uptake varied by journal tier: flagship Nature (14% DBPR), sister disciplinary journals (12%), and Nature Communications (9%). A Pearson chi-square test showed a statistically significant but modest association between journal tier and review model (χ² = 378.17, df = 2, p < 2.2 × 10⁻¹⁶, Cramér’s V = 0.054). Gender showed no association with review model (χ² = 0.25, df = 1, p = 0.618). Institutional prestige displayed a clear pattern: authors from lower-ranked institutions chose DBPR more often (4% of group 1, 8% of group 2, 13% of group 3). Country-level analysis of the ten highest-submission nations revealed small but significant differences in DBPR uptake.
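The reported effect size can be sanity-checked from the chi-square statistic itself: Cramér's V for an r × c table is √(χ² / (n · min(r−1, c−1))). Assuming n is the full dataset of 128,454 manuscripts and a 3 × 2 table (three journal tiers by two review models, so min(r−1, c−1) = 1), the reported V ≈ 0.054 follows directly; the choice of n here is an inference, not stated in the summary.

```python
import math

# Cramér's V from a chi-square statistic on an r x c table:
#   V = sqrt(chi2 / (n * min(r - 1, c - 1)))
chi2 = 378.17    # reported tier-vs-review-model statistic
n = 128454       # assumed sample size (full manuscript dataset)
min_dim = 1      # 3 x 2 table: min(3 - 1, 2 - 1) = 1

v = math.sqrt(chi2 / (n * min_dim))
print(round(v, 3))  # -> 0.054, matching the reported effect size
```

The tiny V despite the enormous χ² illustrates why the text calls the association "significant but modest": with n in the six figures, even weak associations reach extreme p-values.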

Outcome analysis demonstrated that DBPR manuscripts faced a higher probability of rejection at both the first decision and after full peer review. The rejection rate for DBPR papers was approximately 5–7% higher than for SBPR papers, a difference that remained statistically significant after controlling for journal tier and institution prestige.

The authors acknowledge several limitations. First, manuscript quality was not independently measured, preventing a clear distinction between bias against authors and genuine quality differences. Second, the observational design lacks a controlled experiment where the same manuscript is evaluated under both review models. Third, gender inference based on first names can be inaccurate for non‑Western naming conventions, and a substantial number of records remained “NA”. Fourth, the categorization of institutional prestige is arbitrary and may affect the observed uptake patterns. Finally, transferred manuscripts were excluded from the uptake analysis, which could bias estimates if transfer behavior differs by review model.

Despite these constraints, this is the first large‑scale, cross‑disciplinary assessment of DBPR within the high‑impact Nature portfolio. The findings suggest that, contrary to the expectation that DBPR levels the playing field, authors who opt for DBPR experience less favorable editorial outcomes. This raises questions about the practical efficacy of DBPR as a bias‑mitigation tool in elite journals. The study recommends that publishers and editors critically evaluate the implementation of DBPR, consider complementary measures (e.g., reviewer training, multi‑blind systems), and conduct further research to disentangle author‑driven selection effects from genuine peer‑review bias.

