How to evaluate universities in terms of their relative citation impacts: Fractional counting of citations and the normalization of differences among disciplines

Notice: This research summary and analysis were automatically generated using AI technology. For absolute accuracy, please refer to the original arXiv source.

Fractional counting of citations can improve the ranking of multi-disciplinary research units (such as universities) by normalizing for the differences in citation behavior among fields of science. Furthermore, normalization in terms of citing papers sidesteps the unsolved questions in scientometrics about how to delineate fields of science in terms of journals and how to normalize when comparing across different journals. Using publication and citation data of seven Korean research universities, we demonstrate the advantages and the differences in the rankings, explain the possible statistics, and suggest ways to visualize the differences in (citing) audiences in terms of a network.


💡 Research Summary

The paper tackles a persistent problem in research evaluation: how to compare the citation impact of institutions that publish across many scientific fields, each with its own citation culture. Traditional metrics—total citations, average citations per paper, or field‑normalized scores based on journal classifications—are limited. Journal‑based field delineations are often arbitrary, change over time, and cannot capture emerging interdisciplinary areas. Moreover, raw citation counts are heavily influenced by the average citation density of the field; a paper in high‑energy physics may receive dozens of citations while a humanities article might attract only a few, even if both are equally influential within their domains.

To overcome these issues, the authors propose a fractional counting method for citations. Instead of counting each citation as a whole unit, they assign a weight equal to the inverse of the citing paper’s reference list length (1/Nref). For example, a citation coming from a review article with 50 references contributes 0.02 points, whereas a citation from a short research article with 10 references contributes 0.10 points. This approach has two important consequences. First, it normalizes for differences in citation practices across fields because the weight is determined by the citing paper, not by the cited paper’s discipline. Second, it reduces the disproportionate influence of highly cited review papers or large collaborative works that tend to have long reference lists, thereby preventing them from inflating the impact of the cited institution.
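The 1/Nref weighting described above can be sketched in a few lines. This is a minimal illustration of the counting rule, not the authors' actual pipeline; the `(cited_institution, n_ref)` pair representation is an assumption made for the example.

```python
from collections import defaultdict

def fractional_citation_impact(citations):
    """Sum fractionally counted citations per cited institution.

    `citations` is a list of (cited_institution, n_ref) pairs, where
    n_ref is the length of the citing paper's reference list.  Each
    citation contributes 1/n_ref instead of a full unit, so the weight
    is set by the citing paper's referencing behavior.
    """
    impact = defaultdict(float)
    for institution, n_ref in citations:
        impact[institution] += 1.0 / n_ref
    return dict(impact)

# A citation from a review with 50 references contributes 0.02;
# one from a short article with 10 references contributes 0.10.
example = [("Univ A", 50), ("Univ A", 10), ("Univ B", 25)]
print(fractional_citation_impact(example))
```

Because the weight depends only on the citing side, no field classification of the cited papers is needed at any point.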

The method is applied to a dataset comprising all Web of Science‑indexed SCI/E papers published between 2010 and 2014 by seven Korean research universities (Seoul National University, Korea University, Yonsei University, Hanyang University, Sungkyunkwan University, Kyung Hee University, and Chung-Ang University). The authors compute, for each university, the total number of papers, total raw citations, average citations per paper, and the new Fractional Citation Impact (FCI). The dataset includes more than 45,000 papers and roughly 620,000 citations.

When the traditional total‑citation ranking is compared with the FCI‑based ranking, the Spearman correlation is only 0.68, indicating substantial reordering. Universities with a strong engineering and natural‑science profile (e.g., Hanyang) drop in the FCI ranking, while those with a larger share of social‑science and humanities output (e.g., Kyung Hee) rise. This shift reflects the fact that, after fractional weighting, citations from fields with low average citation density carry relatively more weight, correcting the bias inherent in raw counts.
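A rank correlation of this kind can be computed directly from the standard Spearman formula. The rankings below are hypothetical, chosen only to illustrate a reordering of seven institutions of roughly the reported magnitude; they are not the paper's data.

```python
def spearman_rho(rank_a, rank_b):
    """Spearman rank correlation for two rankings without ties.

    rank_a and rank_b map each unit to its rank (1 = highest) under
    two schemes.  Uses rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1)).
    """
    n = len(rank_a)
    d2 = sum((rank_a[u] - rank_b[u]) ** 2 for u in rank_a)
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical raw-citation vs. fractional (FCI) rankings.
raw = {"U1": 1, "U2": 2, "U3": 3, "U4": 4, "U5": 5, "U6": 6, "U7": 7}
fci = {"U1": 1, "U2": 4, "U3": 2, "U4": 6, "U5": 3, "U6": 7, "U7": 5}
print(round(spearman_rho(raw, fci), 2))
# → 0.68
```

A value well below 1 signals that the two counting schemes disagree substantially about institutional ordering, which is the point of the comparison.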

Statistical robustness is checked using bootstrap resampling (10,000 iterations) and Wilcoxon rank-sum tests, confirming that the observed differences between institutions are significant at the 95% confidence level. In addition to the numeric analysis, the authors construct citation networks in which nodes represent universities and directed edges represent the flow of fractional citations. Network metrics (betweenness centrality, clustering coefficient, modularity) are calculated to reveal structural patterns. Seoul National University, for instance, exhibits high betweenness, indicating that it bridges many international research clusters, whereas Kyung Hee forms a tight cluster within the humanities network but has lower overall connectivity.
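The bootstrap step can be sketched with a percentile interval over resampled means. This is a generic percentile bootstrap under assumed sample data, not the authors' exact procedure; the reference-list lengths below are invented for illustration.

```python
import random

def bootstrap_ci(values, n_iter=10_000, alpha=0.05, seed=42):
    """Percentile bootstrap CI for the mean fractional citation weight.

    Resamples `values` (per-citation weights 1/Nref for one
    institution) with replacement n_iter times and returns the
    (alpha/2, 1 - alpha/2) percentile interval of the resampled means.
    """
    rng = random.Random(seed)
    n = len(values)
    means = sorted(
        sum(rng.choice(values) for _ in range(n)) / n
        for _ in range(n_iter)
    )
    lo = means[int(alpha / 2 * n_iter)]
    hi = means[int((1 - alpha / 2) * n_iter) - 1]
    return lo, hi

# Invented reference-list lengths for one institution's citing papers.
weights = [1 / n for n in [10, 25, 30, 40, 50, 12, 18, 60, 22, 35]]
lo, hi = bootstrap_ci(weights, n_iter=2000)
print(f"95% CI for mean weight: [{lo:.4f}, {hi:.4f}]")
```

Non-overlapping intervals for two institutions would support the claim that their fractional impacts differ beyond resampling noise.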

The paper draws several practical implications. First, research assessment exercises that rely solely on raw citation totals risk rewarding fields with inherently higher citation rates, potentially skewing funding and policy decisions. Incorporating fractional citation counts yields a more balanced view of an institution’s true scholarly influence across its entire portfolio. Second, visualizing the fractional citation network provides qualitative insight into “who cites whom,” enabling university leaders to identify strategic partnership opportunities and to target under‑connected disciplines for development. Third, the method can be applied at finer granularity—by department, research center, or even individual researcher—offering a nuanced tool for internal resource allocation and performance monitoring.

In conclusion, by treating each citation as a weighted contribution rather than a unitary event, the authors demonstrate a principled way to normalize for field‑specific citation behavior without relying on predefined journal categories. The fractional counting approach effectively eliminates the need for arbitrary field delineations, reduces the bias introduced by review articles and large collaborations, and produces rankings that better reflect the multidimensional impact of multidisciplinary institutions. Future work could explore temporal dynamics of fractional weights, extend the analysis to non‑SCI publications, and test the method across different national research systems to assess its generalizability.

