Severe Language Effect in University Rankings: Particularly Germany and France are wronged in citation-based rankings

We applied a set of standard bibliometric indicators to monitor the scientific state of the art of 500 universities worldwide and constructed a ranking on the basis of these indicators (Leiden Ranking 2010). We find a dramatic and hitherto largely underestimated language effect in bibliometric, citation-based measurement of research performance when comparing a ranking based on all Web of Science (WoS)-covered publications with one based only on English-language WoS-covered publications. The effect is particularly strong for Germany and France.


💡 Research Summary

The paper investigates how language influences citation‑based university rankings by applying a suite of standard bibliometric indicators to a sample of 500 universities worldwide and constructing the Leiden Ranking 2010. The authors retrieved all publications indexed in the Web of Science (WoS) for each institution and then created two parallel datasets: (1) the full set of WoS‑covered papers regardless of language, and (2) a subset containing only English‑language papers. For each dataset they calculated conventional metrics such as total publication count, total citations, average citations per paper, and field‑normalized citation impact, and used these to generate institutional rankings.
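The field-normalized citation impact mentioned above can be illustrated with a minimal sketch. The data, field names, and baseline values below are invented for illustration; the calculation follows the common mean-normalized-citation-score idea (each paper's citations divided by the world average for its field, then averaged), not the authors' actual code:

```python
# Toy papers for one institution: (citations, field). Hypothetical data.
papers = [
    (10, "physics"),
    (2, "history"),
    (6, "physics"),
]

# Hypothetical world-average citations per paper in each field.
field_baseline = {"physics": 8.0, "history": 1.0}

def normalized_impact(papers, baseline):
    """Mean of per-paper ratios citations / field average (MNCS-style)."""
    ratios = [cites / baseline[field] for cites, field in papers]
    return sum(ratios) / len(ratios)

score = normalized_impact(papers, field_baseline)
print(round(score, 3))  # (10/8 + 2/1 + 6/8) / 3 = 1.333
```

A score above 1.0 means the institution's papers are cited more than the world average for their fields; the same formula applied to the full dataset and to the English-only subset yields the two parallel rankings the paper compares.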

A systematic comparison of the two rankings revealed a pronounced "language effect." Institutions from non‑English‑dominant countries, especially Germany and France, suffered substantial rank penalties when the full multilingual dataset was used: when the analysis was restricted to English‑only papers, German and French universities rose by 30–40 positions on average. This shift is attributed to the lower international citation rates of papers published in German, French, or other non‑English languages, which are less likely to be cited by the predominantly English‑speaking scholarly community. In contrast, universities in the United Kingdom, the United States, and Scandinavia showed negligible differences because more than 90% of their output is already in English.
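The rank-shift comparison described above can be sketched as follows. Institution names and impact scores here are invented toy values, not the paper's data; the point is only the mechanics of ranking each dataset and taking the positional difference:

```python
def rank_positions(scores):
    """Map institution -> rank (1 = best) by descending impact score."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    return {inst: pos + 1 for pos, inst in enumerate(ordered)}

# Hypothetical impact scores under the two datasets.
all_languages = {"U_DE": 1.00, "U_FR": 0.90, "U_UK": 1.40, "U_US": 1.50}
english_only  = {"U_DE": 1.45, "U_FR": 1.42, "U_UK": 1.40, "U_US": 1.50}

r_all = rank_positions(all_languages)
r_en = rank_positions(english_only)

# Positive shift = institution rises once non-English papers are excluded.
shifts = {inst: r_all[inst] - r_en[inst] for inst in all_languages}
print(shifts)  # {'U_DE': 1, 'U_FR': 1, 'U_UK': -2, 'U_US': 0}
```

In this toy example the German and French institutions gain positions under the English-only ranking while the UK institution drops, mirroring (on a tiny scale) the 30–40-position average shifts the paper reports.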

The authors argue that this language bias has far‑reaching implications. Many global ranking systems, funding agencies, and policy makers rely on citation‑based indicators that implicitly favor English‑language output, thereby undervaluing the genuine research contributions of institutions that publish extensively in other languages. This can affect national research strategies, allocation of resources, and the perceived international competitiveness of universities. To mitigate the bias, the paper proposes several remedial measures: (i) applying language‑specific weighting factors to citations, (ii) expanding bibliometric databases to better capture non‑English literature, and (iii) developing multilingual normalization models that adjust impact scores according to the typical citation behavior of each language.
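Remedy (i) above, language-specific weighting factors, could look something like the following sketch. The weight values and paper data are purely hypothetical placeholders, not figures proposed by the authors; the sketch only shows the shape of the adjustment (upweighting citations to papers in languages with systematically lower citation rates):

```python
# Hypothetical per-language weights; > 1.0 compensates for the lower
# international citation rates of non-English papers. Illustrative only.
language_weights = {"en": 1.0, "de": 1.8, "fr": 1.7}

def weighted_citation_total(papers, weights):
    """Sum of citation counts, each scaled by its paper's language weight."""
    return sum(cites * weights[lang] for cites, lang in papers)

papers = [(10, "en"), (4, "de"), (2, "fr")]
total = weighted_citation_total(papers, language_weights)
print(total)  # 10*1.0 + 4*1.8 + 2*1.7 = 20.6
```

In practice such weights would have to be estimated empirically per language and field, which is essentially what the paper's proposed multilingual normalization models would do.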

The study acknowledges limitations, notably that WoS does not comprehensively index all non‑English journals and that citation metrics alone do not reflect the full spectrum of research performance (e.g., patents, industry collaborations, societal impact). Nevertheless, the empirical demonstration of a sizable language effect underscores the need for more inclusive bibliometric practices. Future work should incorporate additional data sources such as Scopus, Google Scholar, and regional databases, and test the proposed weighting schemes in real‑world ranking calculations to assess their effectiveness in producing a fairer, more balanced global university assessment.