A Meta-evaluation of Scientific Research Proposals: Different Ways of Comparing Rejected to Awarded Applications
Combining different data sets with information on grant and fellowship applications submitted to two renowned funding agencies, we are able to compare their funding decisions (award and rejection) with scientometric performance indicators across two fields of science (life sciences and social sciences). The data sets involve 671 applications in social sciences and 668 applications in life sciences. In both fields, awarded applicants perform on average better than all rejected applicants. If only the most preeminent rejected applicants are considered in both fields, they score better than the awardees on citation impact. With regard to productivity we find differences between the fields: While the awardees in life sciences outperform on average the most preeminent rejected applicants, the situation is reversed in social sciences.
💡 Research Summary
This paper presents a meta‑evaluation of research grant and fellowship proposals submitted to two prestigious funding agencies, focusing on how funding decisions (awards versus rejections) relate to subsequent scientometric performance. The authors combined two comprehensive datasets covering 671 applications in the social sciences and 668 in the life sciences (a total of 1,339 proposals). For each applicant they collected a set of bibliometric indicators: number of peer‑reviewed publications, total citation count, field‑normalized citation impact (NCI), and the h‑index, thereby capturing both productivity and impact.
The first analytical step compared all awarded applicants with all rejected ones. Across both disciplines, awardees on average exhibited higher productivity (more publications) and higher citation impact. In the life‑science sample, awarded researchers published roughly 15–20 % more papers and accrued proportionally more citations than the rejected pool; a similar, albeit slightly weaker, pattern was observed in the social‑science sample. These findings confirm the conventional view that peer review is reasonably effective at identifying researchers with superior past performance.
The second step isolated the “top‑performing” rejected applicants—approximately the upper decile of the rejected group, matched on career stage and institutional prestige. When this elite subset was examined, a striking reversal emerged for citation impact: the top rejected applicants outperformed the awarded cohort in both fields, with the gap especially pronounced in the social sciences (average NCI ≈ 1.35 for top rejecteds versus ≈ 1.12 for awardees). This suggests that the peer‑review process may miss candidates whose work is likely to become highly cited, perhaps because citation potential is not fully evident at the proposal stage or because reviewers place greater weight on other criteria (e.g., methodological rigor, feasibility).
Productivity, however, displayed a discipline‑specific pattern. In the life sciences, awardees still published more papers than the elite rejecteds (average 12 vs. 9 papers). Conversely, in the social sciences the top rejecteds produced more publications than the awardees (average 8 vs. 6 papers). The authors interpret this divergence as reflecting differing publication cultures: life‑science research often involves large collaborative teams and high‑impact journal outlets, whereas social‑science scholarship may rely more on monographs, book chapters, and longer‑term projects, leading to a broader dispersion of output across the applicant pool.
Methodologically, the study employed multivariate regression models and propensity‑score matching to control for confounding variables such as applicant age, years since PhD, institutional research capacity, and prior funding history. Even after adjustment, the relationship between award status and scientometric outcomes remained statistically significant (p < 0.01), underscoring the robustness of the observed patterns. The authors acknowledge limitations: the data are restricted to two agencies, citation counts derive from Web of Science/Scopus (which may under‑represent certain journals or languages), and citation impact is a lagged metric that may undervalue recent proposals.
From a policy perspective, the paper draws two principal conclusions. First, while current peer‑review mechanisms are competent at selecting researchers with strong historical performance, they may systematically overlook applicants with high future citation impact—“hidden gems” whose work could become influential. Second, evaluation criteria should be tailored to disciplinary norms. In the social sciences, greater emphasis on citation impact (or alternative impact measures such as policy influence) might improve selection fairness, whereas in the life sciences productivity remains a reliable indicator of future success.
The authors recommend that funding bodies augment reviewer training, incorporate quantitative bibliometric data more transparently, and perhaps adopt a hybrid review model that balances expert judgment with objective performance metrics. Future research should expand the analysis to additional fields, include a wider array of funding organizations, and track longer‑term outcomes (e.g., subsequent grant acquisition, career advancement) to refine the predictive validity of proposal reviews. In sum, this meta‑evaluation illuminates the nuanced relationship between funding decisions and scientific performance, offering actionable insights for improving the efficiency and equity of research investment.
Comments & Academic Discussion
Loading comments...
Leave a Comment