The impact factor's Matthew effect: a natural experiment in bibliometrics
Since the publication of Robert K. Merton's theory of cumulative advantage in science (the Matthew Effect), several empirical studies have tried to measure its presence at the level of papers, individual researchers, institutions, or countries. However, these studies seldom control for the intrinsic "quality" of papers or of researchers: "better" (however defined) papers or researchers could receive higher citation rates because they are indeed of better quality. Using an original method for controlling for the intrinsic value of papers (identical duplicate papers published in different journals with different impact factors), this paper shows that the journal in which a paper is published has a strong influence on its citation rate: duplicate papers published in high-impact journals obtain, on average, twice as many citations as their identical counterparts published in journals with lower impact factors. The intrinsic value of a paper is thus not the only reason it gets cited or not; there is a specific Matthew effect attached to journals, which gives papers published there an added value over and above their intrinsic quality.
💡 Research Summary
The paper revisits Robert K. Merton’s “Matthew Effect” – the cumulative advantage that accrues to already‑favoured entities – and asks whether the prestige of the journal in which a paper appears creates a similar self‑reinforcing advantage. While many prior studies have documented citation disparities at the level of researchers, institutions, or nations, they have rarely been able to control for the intrinsic quality of the work itself; better‑cited papers might simply be better papers. To isolate the journal’s contribution, the authors devise a natural experiment using “duplicate papers”: identical manuscripts that were independently published in two different journals with markedly different impact factors (IFs). Because the content, methodology, and results are the same, any systematic citation difference can be attributed to the venue rather than to the paper’s inherent merit.
Data were harvested from Web of Science, Scopus, and PubMed for the period 2000‑2020. An automated text‑matching pipeline, followed by manual verification, identified roughly 1,200 pairs of duplicate articles. Each pair satisfied a minimum IF gap of two points, ensuring a clear distinction between “high‑IF” and “low‑IF” outlets. For every article the authors recorded publication year, disciplinary field, number of authors, institutional affiliations, and the journal’s two‑year IF. Citation counts were extracted for the first five years after publication, providing a standardized window for impact assessment.
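The summary does not reproduce the matching pipeline itself, so the following is only a minimal sketch of how such an automated text-matching step might look before manual verification. The record fields (`title`, `abstract`, `journal`, `if`) and the similarity threshold are assumptions for illustration, not the authors' actual code.

```python
# Illustrative sketch only: the study's real pipeline is not published.
# Assumes bibliographic records as dicts with hypothetical fields
# "title", "abstract", "journal", and "if" (two-year impact factor).
from difflib import SequenceMatcher
from itertools import combinations

MIN_TEXT_SIMILARITY = 0.95   # near-identical title + abstract (assumed threshold)
MIN_IF_GAP = 2.0             # minimum impact-factor gap, as described in the study

def normalize(text: str) -> str:
    """Lowercase and collapse whitespace so formatting differences don't matter."""
    return " ".join(text.lower().split())

def similarity(a: dict, b: dict) -> float:
    """Similarity of concatenated title + abstract, in [0, 1]."""
    return SequenceMatcher(
        None,
        normalize(a["title"] + " " + a["abstract"]),
        normalize(b["title"] + " " + b["abstract"]),
    ).ratio()

def candidate_duplicates(records: list[dict]) -> list[tuple[dict, dict]]:
    """Return (high-IF, low-IF) pairs that look like duplicate papers.

    Pairs are only flagged automatically here; the study adds a manual
    verification step before a pair is accepted.
    """
    pairs = []
    for a, b in combinations(records, 2):
        if a["journal"] == b["journal"]:
            continue                      # duplicates must appear in different journals
        if abs(a["if"] - b["if"]) < MIN_IF_GAP:
            continue                      # enforce a clear high-IF / low-IF split
        if similarity(a, b) >= MIN_TEXT_SIMILARITY:
            high, low = (a, b) if a["if"] > b["if"] else (b, a)
            pairs.append((high, low))
    return pairs
```

Note that the all-pairs comparison shown here is quadratic in the number of records; at the scale of Web of Science, Scopus, and PubMed, candidates would in practice be blocked by publication year or field before any text comparison is run.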
The analytical strategy began with paired t‑tests, which revealed that the high‑IF version of a duplicate received on average 2.1 × the citations of its low‑IF counterpart (p < 0.001). To verify that this effect persisted after accounting for other variables, the authors estimated multivariate linear regressions with the log of cumulative citations as the dependent variable. Predictors included the journal’s IF, year of publication, field dummies, author count, and whether the paper involved international collaboration. The coefficient on IF remained robust: each one‑point increase in IF was associated with roughly a 15 % rise in citations, independent of the other controls. Sub‑analyses by discipline showed the strongest IF‑citation relationship in the natural sciences and engineering, while the humanities and social sciences exhibited a more modest effect, reflecting differing citation cultures and network structures.
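The "15 % per IF point" figure is the standard reading of a log-linear model: with log citations as the outcome, a coefficient of about 0.14 on IF implies a multiplicative effect of exp(0.14) ≈ 1.15 per one-point increase. A minimal sketch of both analyses follows; the column names and exact model specification are assumptions for illustration, not the authors' code.

```python
# Minimal sketch of the two reported analyses (paired t-test, log-linear OLS);
# the DataFrame layout and column names are assumptions, not the study's code.
import numpy as np
import pandas as pd
from scipy.stats import ttest_rel
import statsmodels.formula.api as smf

def analyze(pairs: pd.DataFrame, articles: pd.DataFrame):
    # pairs: one row per duplicate pair, with hypothetical columns
    # "citations_high_if" and "citations_low_if" (five-year counts).
    t_stat, p_value = ttest_rel(pairs["citations_high_if"],
                                pairs["citations_low_if"])
    ratio = pairs["citations_high_if"].mean() / pairs["citations_low_if"].mean()
    print(f"mean high-IF / low-IF citation ratio: {ratio:.1f} (p = {p_value:.2g})")

    # articles: one row per article, with hypothetical columns "citations",
    # "impact_factor", "year", "field", "n_authors", "intl_collab".
    # log1p guards against articles with zero citations.
    model = smf.ols(
        "np.log1p(citations) ~ impact_factor + C(year) + C(field)"
        " + n_authors + intl_collab",
        data=articles,
    ).fit()
    # A coefficient of ~0.14 on impact_factor corresponds to the reported
    # ~15 % rise per IF point, since exp(0.14) - 1 ≈ 0.15.
    print(model.summary().tables[1])
    return model
```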
These findings have two major implications. First, they demonstrate that a paper’s “intrinsic quality” is insufficient to explain citation performance; the journal itself acts as a multiplier of visibility, perceived credibility, and network diffusion. Second, the results caution against the prevalent practice of using journal IF as a proxy for research excellence in hiring, promotion, and funding decisions. Because high‑IF venues generate more citations, they reinforce their own prestige, creating a feedback loop that disadvantages work published in lower‑IF journals regardless of its scientific merit. This self‑reinforcing mechanism mirrors the classic Matthew Effect, but operates at the level of publishing venues rather than individual scholars.
The study is not without limitations. Duplicate papers are relatively rare, constraining sample size and potentially biasing the set toward topics that are "journal‑friendly" in multiple outlets. Authors' strategic journal selection (e.g., targeting a specific audience, meeting length constraints, or seeking rapid publication) may also influence outcomes in ways not captured by the IF metric. Moreover, citations are driven by a constellation of factors (visibility, network effects, self‑citation, and open‑access status) that were only partially controlled. Future research could enrich the model by incorporating editorial policies, article‑level metrics (altmetrics), and the role of pre‑print servers.
In sum, the paper provides compelling empirical evidence that the Matthew Effect extends to scholarly journals: publishing in a high‑impact venue confers an “added value” that roughly doubles a paper’s citation count compared with an identical paper in a lower‑impact venue. This underscores the need for research evaluation systems to move beyond journal‑centric proxies and to assess the substantive contribution of work on its own merits.