Throwing Out the Baby with the Bathwater: The Undesirable Effects of National Research Assessment Exercises on Research

The evaluation of research quality at a national level has become increasingly common. The UK has been at the forefront of this trend, having undertaken many assessments since 1986, the latest being the 2014 Research Excellence Framework (REF). The argument of this paper is that, whatever the intended benefits in terms of evaluating and improving research, these exercises have had many, presumably unintended, consequences that are highly undesirable for research and the university community more generally. We situate our analysis within Bourdieu’s theory of cultural reproduction and then focus on the peculiarities of the 2008 Research Assessment Exercise (RAE) and the 2014 REF, the rules of which allowed for, and indeed encouraged, significant game-playing on the part of striving universities. We conclude with practical recommendations for maintaining the general intention of research assessment without the undesirable side-effects.


💡 Research Summary

The paper provides a critical examination of national research assessment exercises, focusing on the United Kingdom’s Research Assessment Exercise (RAE) of 2008 and the Research Excellence Framework (REF) of 2014. Using Pierre Bourdieu’s theory of cultural reproduction as an analytical lens, the authors argue that while these assessments were introduced with the intention of evaluating and improving research quality, they have generated a series of unintended and largely undesirable consequences for the academic community.

First, the authors outline Bourdieu’s concepts of cultural capital and the academic field, emphasizing how external evaluation mechanisms inject a new form of “assessment capital” that reshapes the internal dynamics of the scholarly field. The RAE and REF operationalise this by converting research outputs, citations, grant income, and even research environment metrics into quantifiable scores. This scoring system creates strong incentives for universities to engage in strategic “game‑playing” to maximise their points.

The paper identifies four major adverse effects. (1) Conservatism of research topics – scholars gravitate toward “safe” projects that are likely to score well, marginalising high‑risk, innovative work and reducing disciplinary diversity. (2) Hierarchisation of academic labour – institutions differentiate between “core” researchers who generate most of the assessment points and peripheral staff, reinforcing existing hierarchies and limiting career progression for early‑career academics. (3) Administrative overload – the need to package and submit data within tight assessment windows forces researchers to devote substantial time to bureaucratic tasks, detracting from genuine scholarly inquiry. (4) Suppression of international collaboration – because the assessment framework favours outputs attributed to UK institutions, scholars often avoid cross‑border partnerships that could jeopardise their scores.

These dynamics collectively erode the very qualities the assessments aim to promote: intellectual creativity, methodological pluralism, and a healthy competitive environment. The authors warn that such side‑effects are not unique to the UK but represent a broader risk inherent in any national evaluation system that overly quantifies research performance.

In the concluding section, the paper offers concrete policy recommendations to mitigate these harms while preserving the beneficial aspects of research assessment. Suggested reforms include: (a) broadening evaluation criteria to recognise long‑term impact and interdisciplinary work; (b) increasing the transparency of scoring rules to reduce gaming incentives; (c) streamlining data submission through automated platforms to lessen administrative burdens; and (d) explicitly rewarding international co‑authorship and collaborative grants to encourage global research networks. By implementing these changes, the authors contend, assessment exercises can better align with their original purpose of enhancing research quality and informing resource allocation, without sacrificing the vitality of the academic ecosystem.