Comment: Bibliometrics in the Context of the UK Research Assessment Exercise

Research funding and reputation in the UK have, for over two decades, been increasingly dependent on a regular peer review of all UK departments. This system is set to move to one based more on bibliometrics. Assessment exercises of this kind influence the behavior of institutions, departments, and individuals, and therefore bibliometrics will have effects beyond simple measurement. [arXiv:0910.3529]


💡 Research Summary

The paper by B. W. Silverman provides a critical commentary on the role of bibliometrics within the United Kingdom’s Research Assessment Exercise (RAE) and the potential shift from a peer‑review‑driven system to one that relies heavily on quantitative citation‑based metrics. Silverman begins by outlining the historical context: for more than two decades the RAE has been the principal mechanism by which UK universities are evaluated for research quality, and consequently how research funding and institutional reputation are allocated. Unlike teaching funds, which are distributed on a flat per‑student basis, research funds are awarded competitively based on the outcomes of the RAE. The assessment covers every discipline, and the results have a profound impact on departmental hiring, strategic planning, and the mobility of leading scholars.

Silverman draws on his experience as chair of the 2008 RAE panel for Probability, Statistics, and Operational Research to argue that peer review must remain at the core of any future assessment framework. He warns that replacing or diluting peer review with purely bibliometric or quantitative methods would introduce serious biases into the evaluation process and, more importantly, would shape institutional and individual behavior in undesirable ways.

The paper identifies several mechanisms through which the current RAE already influences behavior. Positive effects include incentives for new entrants to the profession, as the RAE allows recent hires to submit a smaller corpus of work and provides a “vitality” score for departments. This has helped stimulate recruitment and increased mobility of top researchers. Negative effects arise from the fixed census date, which creates a “boom‑bust” hiring cycle: institutions hire large numbers of staff shortly before the deadline, then face a hiring moratorium afterward. Moreover, the inclusion of grant income in the assessment pressures faculty to pursue grant‑supported projects rather than more independent or exploratory research.

Silverman then turns to the proposed bibliometric shift. Proponents argue that, when aggregated across whole universities, metrics such as citation counts and journal impact factors correlate strongly with peer‑review outcomes, suggesting that a metrics‑based system could serve as a cost‑effective proxy. Silverman challenges this claim on several grounds. First, aggregation masks disciplinary differences; fields like mathematics and statistics generate fewer citations, so departments in those fields would be unfairly penalized. Second, reliance on citation counts encourages a “publish or perish” mentality focused on short‑term, highly citable work, potentially crowding out long‑term, foundational research. Third, institutions might respond by manipulating hiring practices—either limiting new hires to protect citation averages or aggressively recruiting highly cited external scholars—to boost metric scores. Fourth, individual researchers whose work is influential in ways not captured by short‑term citations (e.g., methodological contributions, software development) would be undervalued.

Silverman also critiques the use of journal impact factors as a surrogate for paper quality, likening it to judging an individual’s wealth by their country’s average GDP. While knowledge of a journal’s editorial standards can inform a judgment about a paper’s likely quality, the impact factor is a blunt instrument that does not reflect the nuanced peer‑review process.

In conclusion, Silverman acknowledges that bibliometric data can be useful as supplementary information—providing a “useful servant” in certain circumstances—but insists that they are a “very poor master.” He advocates for retaining peer review as the primary assessment mechanism, using metrics only as ancillary tools, and warns that an overreliance on bibliometrics could distort research incentives, marginalize low‑citation disciplines, and ultimately undermine the very purpose of the RAE: to support high‑quality, innovative research.

