When research assessment exercises leave room for opportunistic behavior by the subjects under evaluation

Reading time: 5 minutes

📝 Original Info

  • Title: When research assessment exercises leave room for opportunistic behavior by the subjects under evaluation
  • ArXiv ID: 1810.13216
  • Date: 2023-05-18
  • Authors: Researchers from original ArXiv paper

📝 Abstract

This study joins the stream of research on the perverse effects that PBRF systems can induce in the subjects evaluated. In the authors' view, it is more often than not the doubtful scientific basis of the evaluation criteria that leaves room for opportunistic behavior. The work examines the 2004-2010 Italian national research assessment (VQR) to verify possible opportunistic behavior by universities aimed at limiting the penalization of their performance (and funding) caused by the presence of scientifically unproductive professors on faculty. In particular, institutions may have favored "gift authorship" practices. The analysis therefore focuses on the output of professors who were unproductive in the VQR publication window but became productive ("new productives") in the following five years: a number of universities show a remarkably higher-than-average share of publications by new productives that are co-authored exclusively with colleagues from the same university.
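The screening logic described above can be sketched in a few lines: for each university, compute the share of new productives' publications whose co-authors all carry the same institutional affiliation, which can then be compared against the cross-university average. This is a minimal illustrative sketch, not the authors' actual pipeline; the sample records, university names, and field layout are all assumptions.

```python
# Hypothetical sketch of the paper's screening step: per-university share of
# "new productives'" publications co-authored exclusively with colleagues
# from the same institution. Data below is invented for illustration.

from collections import defaultdict

# Each record: (university of the "new productive" author,
#               set of all co-authors' affiliations on that publication)
publications = [
    ("Univ A", {"Univ A"}),            # intramural-only co-authorship
    ("Univ A", {"Univ A", "Univ B"}),  # has an external co-author
    ("Univ B", {"Univ B"}),
    ("Univ B", {"Univ B"}),
    ("Univ B", {"Univ B", "Univ C"}),
]

def intramural_shares(pubs):
    """Share of each university's publications whose co-authors are all internal."""
    totals, internal = defaultdict(int), defaultdict(int)
    for univ, affiliations in pubs:
        totals[univ] += 1
        if affiliations == {univ}:  # every co-author is from the same university
            internal[univ] += 1
    return {u: internal[u] / totals[u] for u in totals}

shares = intramural_shares(publications)
print(shares)  # → {'Univ A': 0.5, 'Univ B': 0.6666666666666666}
```

A university whose share sits well above the population average would be flagged for possible "gift authorship" practices, as in the analysis described in the abstract.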

📄 Full Content

Governments are clearly aware of the challenges of competitiveness in an increasingly global and knowledge-based economy. Given this, many countries have developed policies for improving the effectiveness and efficiency of their national scientific infrastructure. These have included the introduction of New Public Management (NPM) tools in national research institutions, such as systems of performance-based research funding (PBRF) (Hicks, 2012; Lewis, 2013; Woelert, 2015). PBRF systems were first implemented in the late 1980s and early 1990s in English-speaking countries, and then spread to western Europe, eastern Europe and Asia. By 2016, in the European Union alone, at least nine countries had implemented comprehensive PBRF systems (Jonkers & Zacharewicz, 2016), each with its own national characteristics.

The increasing influence and diffusion of these systems respond to a series of aims, including: i) guaranteeing and improving the legitimacy, accountability and effectiveness of public spending in research; ii) increasing awareness of the importance of public research in developing competitive knowledge societies (Whitley & Glaser, 2007; Geuna & Martin, 2003); iii) improving research performance and concentrating resources in the best performing organizations (OECD, 2010; Woelert & McKenzie, 2018; Jonkers & Zacharewicz, 2016; Abramo, 2017).

PBRFs are generally based on national research evaluation systems, which are in turn becoming increasingly common. These national systems can be differentiated according to: i) the choice of units to be evaluated (individuals, departments, entire research institutions); ii) the way in which research activities are evaluated (observation period, type of output evaluated, indicators used, methodology adopted, i.e. quantitative vs. qualitative forms of evaluation); iii) the proportion and types of funding allocation associated with PBRF mechanisms. Government funds are generally allocated at an aggregate level based on the results of the evaluation process, leaving the task of internal allocation to the research institutions.

PBRF systems can have various effects on the strategic and organizational management of the structures under assessment, as well as on the behavior of individual researchers (Geuna & Martin, 2003), particularly when institutions deploy incentives at the individual level through monetary rewarding systems, access to resources, career advancement, recruitment, etc. (Moher et al., 2018).

A central question concerning PBRF systems is how and to what extent system-wide incentives can translate into local management practices that influence production effectiveness, efficiency, and recruitment and promotion processes (Espeland & Sauder, 2007; Sauder & Espeland, 2009). Woelert and McKenzie (2018) analyzed the deployment of the Australian national PBRF system within individual research institutions. The authors “find that universities overwhelmingly replicate the major national PBRF indicators internally. If variation was evident, then [it is] mostly in the form of minor modifications to these indicators, not in the choice of indicators per se. Analysis of the Australian case thus demonstrates strong vertical alignment between national and institutional research governance mechanisms as well as considerable convergence in the formal organization and governance of research activities at Australian universities.” In Norway, the findings are “puzzling” (Aagaard, 2015). In many instances, the author finds “a quite tight coupling between system-level incentives and local practices. A large variation across institutions, fields and departments is, however, also observed.”

Some scholars have also considered the important issue of the perverse effects that PBRF systems can have on the subjects evaluated, in addition to the expected positive results. These include the generation of perverse incentives inducing scientific misconduct (e.g. multiplication of irrelevant publications, plagiarism, self-plagiarism, scientific fraud) (Hazelkorn, 2010; Edwards & Roy, 2017), and the discouragement of interdisciplinary and innovative research and of research diversification (Hicks, 2012; Rafols, Leydesdorff, O’Hare, Nightingale, & Stirling, 2012; Wilsdon, 2016; Abramo, D’Angelo & Di Costa, 2018). PBRF systems also entail their own direct and indirect costs, which are often underestimated or ignored.

In 2017, the Journal of Informetrics dedicated a special section to PBRF systems and their effects on scientists’ behavior (Volume 11, Number 3). The debate opened with a discussion paper by van den Besselaar, Heyman, and Sandström (2017a), who object to the results of Butler’s pioneering works (Butler, 2003a; 2003b) on the effects of the Australian PBRF system, in which significant funds were distributed to universities, and then within them, on the basis of aggregate publication counts, with little attention paid to the impact of that output. Butler (2003a; 2003b) found that over the decade examined there had be

…(Full text truncated)…


Reference

This content is AI-processed based on ArXiv data.
