An approach to the author citation potential: Measures of scientific performance which are invariant across scientific fields

The citation potential is a measure of the probability of being cited. It obviously differs among the fields of science, social science, and the humanities because of systematic differences in publication and citation behaviour across disciplines. In the past, the citation potential was studied at the journal level, considering the average number of references in established groups of journals (for example, the crown indicator is based on the journal subject categories in the Web of Science database). In this paper, characterizations of an author's scientific research along three different research dimensions are proposed: production (journal papers), impact (journal citations), and reference (bibliographical sources). We then propose different measures of the citation potential for authors based on proportions of these dimensions. An empirical application to a set of 120 randomly selected, highly productive authors from the CSIC Research Centre (Spain) in four subject areas shows that the ratio between the production and impact dimensions is a normalized measure of the citation potential at the level of individual authors. Moreover, this ratio reduces the between-group variance relative to the within-group variance in a higher proportion than the rest of the indicators analysed. Furthermore, it is consistent with the type of journal impact indicator used. A possible application of this result is in selection and promotion processes within interdisciplinary institutions, since it allows comparisons of authors based on their particular scientific research.


💡 Research Summary

The paper addresses the long‑standing problem that citation behavior varies dramatically across scientific fields, making direct comparison of researchers’ impact unfair. While previous work on citation potential has focused on journals—using average reference counts within predefined journal categories—the authors propose a novel author‑level approach. They decompose an individual’s scholarly activity into three dimensions: production (the number of journal articles authored), impact (the total citations those articles have received), and reference (the total number of bibliographic sources cited within those articles). Recognizing that each of these raw counts is field‑dependent, they construct ratios between the dimensions to obtain normalized indicators that are, in principle, invariant across disciplines. Three ratios are defined: production‑to‑impact (P/I), production‑to‑reference (P/R), and impact‑to‑reference (I/R). The P/I ratio, in particular, captures the efficiency with which a researcher converts published output into citations, thereby reflecting “citation potential” at the individual level.
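The three dimensions and their ratios can be sketched in a few lines of Python. Everything here is illustrative: the function name and sample counts are invented, and the orientation of each ratio (which dimension sits in the numerator) simply follows the naming used in this summary, which may not match the paper's exact definitions.

```python
def citation_ratios(papers, citations, references):
    """Compute the three dimension ratios discussed above:
    production-to-impact (P/I), production-to-reference (P/R),
    and impact-to-reference (I/R).

    papers:     number of journal articles (production)
    citations:  total citations received (impact)
    references: total bibliographic sources cited (reference)
    """
    return {
        "P/I": papers / citations,
        "P/R": papers / references,
        "I/R": citations / references,
    }

# Hypothetical author: 40 papers, 800 citations, 1200 references.
ratios = citation_ratios(papers=40, citations=800, references=1200)
print(ratios["P/I"])  # 0.05
```

Because each raw count grows with field-specific publication and citation norms, dividing one dimension by another is what gives the ratios their (intended) field invariance.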

To test the utility of these ratios, the authors selected a random sample of 120 highly productive researchers from the Spanish CSIC Research Centre, covering four broad subject areas (natural sciences, engineering, social sciences, and humanities). For each researcher they collected the total number of articles, total citations, and total references, then computed the three ratios. Using an ANOVA‑style variance decomposition, they compared between‑group variance (differences among the four fields) to within‑group variance (differences among researchers within the same field) for each indicator. The P/I ratio achieved the greatest reduction of between‑group variance relative to within‑group variance, indicating that it most effectively normalizes across fields. In quantitative terms, the between‑group variance was reduced by roughly 65 % when using P/I, outperforming traditional journal‑based normalization methods such as the “crown indicator.”
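The ANOVA-style comparison described above can be sketched as follows. The toy numbers are invented purely to show the mechanism: a raw, field-dependent count (citations) produces a large between-group sum of squares relative to the within-group one, while a normalized ratio does not.

```python
from statistics import mean

def variance_decomposition(groups):
    """One-way ANOVA-style split of the total sum of squares into
    between-group and within-group components.
    `groups` maps a field name to a list of indicator values."""
    values = [v for g in groups.values() for v in g]
    grand = mean(values)
    between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups.values())
    within = sum((v - mean(g)) ** 2 for g in groups.values() for v in g)
    return between, within

# Toy data: raw citation counts differ hugely by field,
# while the normalized ratio is on a comparable scale everywhere.
raw_citations = {"physics": [900, 1100], "history": [40, 60]}
pi_ratio = {"physics": [0.05, 0.06], "history": [0.05, 0.07]}

for name, groups in [("raw citations", raw_citations), ("P/I ratio", pi_ratio)]:
    b, w = variance_decomposition(groups)
    print(f"{name}: between/within = {b / w:.2f}")
```

An indicator that normalizes well across fields should drive the between/within ratio toward zero, which is exactly the criterion the authors use to rank the candidate measures.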

The authors also examined how the P/I ratio correlates with established journal impact metrics (Journal Impact Factor, SCImago Journal Rank, CiteScore). All correlations were positive, with the strongest relationship observed with the Impact Factor (Pearson r ≈ 0.78). This suggests that the author‑level P/I ratio aligns well with conventional journal‑level impact assessments while offering the advantage of field‑independent comparison.

Limitations are acknowledged: the sample is confined to a single institution’s high‑output researchers, which may limit generalizability; reference counts can be inconsistently recorded across disciplines, potentially introducing measurement error. The authors propose future work involving larger, more diverse datasets spanning multiple countries and institutions, as well as longitudinal analyses to capture temporal dynamics in citation behavior.

In conclusion, the production‑to‑impact ratio emerges as a robust, field‑invariant measure of an author’s citation potential. It condenses three fundamental aspects of scholarly activity into a single, interpretable metric that can be used for fair evaluation in interdisciplinary settings. Practical applications include informing promotion and tenure decisions, allocating research funding, and constructing transparent performance dashboards in institutions where researchers from disparate fields collaborate. By providing a common yardstick that respects disciplinary differences yet enables direct comparison, the proposed indicator contributes a valuable tool to the science‑policy and research‑evaluation toolbox.