Indicators as judgment devices: Citizen bibliometrics in biomedicine and economics
📝 Abstract
The number of publications has been a fundamental merit in the competition for academic positions since the late 18th century. Today, the simple counting of publications has been supplemented with a whole range of bibliometric measures, which supposedly measure not only the volume of research but also its impact. In this study, we investigate how bibliometrics are used for evaluating the impact and quality of publications in two specific settings: biomedicine and economics. Our study exposes the extent and type of metrics used in external evaluations of candidates for academic positions at Swedish universities. Moreover, we show how different bibliometric indicators, both explicitly and implicitly, are employed to value and rank candidates. Our findings contribute to a further understanding of bibliometric indicators as ‘judgment devices’ that are employed in evaluating individuals and their published works within specific fields. We also show how ‘expertise’ in using bibliometrics for evaluative purposes is negotiated at the interface between domain knowledge and skills in using indicators. In line with these results, we propose that the use of metrics in this context is best described as a form of ‘citizen bibliometrics’ - an underspecified term which we build upon in the paper.
📄 Content
Indicators as judgment devices: Citizen bibliometrics in biomedicine and economics
Björn Hammarfelt 1,2 & Alexander D. Rushforth 2
Introduction
Since the 1970s, much of the promise of evaluative bibliometrics (Narin 1976) has been premised on the notion of tempering the subjective and cognitive biases of peer review, so much so that it has often been imagined as an alternative mode of evaluation. In practice, however, bibliometrics tends to supplement expert decision-making rather than supplant it (Moed 2007; van Raan 1996). Indeed, calls to use (advanced) bibliometrics as part of ‘informed peer review’ processes have been posited as a means of mitigating the weaknesses of both approaches (Butler 2007). At the same time, it is often assumed that simple output indicators like the Journal Impact Factor (JIF), the h-index, and journal ranking lists are commonly used in decision-making contexts. Despite such assumptions, to date few have responded to earlier calls by Woolgar (1991) to study actual uses of indicators in peer review and other decision-making contexts.
1 Swedish School of Library and Information Science, SSLIS, University of Borås, SE-501 90 Borås, e-mail: bjorn.hammarfelt@hb.se (Corresponding author)
2 CWTS, Leiden University, 2333 AL Leiden, The Netherlands, e-mail: a.d.rushforth@cwts.leidenuniv.nl
Whilst some attention has been directed towards researchers’ attitudes towards bibliometrics (Aksnes and Rip 2009; Buela-Casal and Zych 2012), fewer studies still have examined actual uses of bibliometrics and their consequences for knowledge production (Rushforth and de Rijcke 2015).
Studies regarding the formalized uses of metrics in research assessments are more common, and a literature looking at practices and effects is gradually emerging (de Rijcke, Wouters, Rushforth, Franssen, and Hammarfelt 2015). While acknowledging the importance of these approaches, we suggest that metrics might have an even more profound influence on the micro-level of individuals and smaller groups. For this reason, it is important to engage with the uses of metrics in high-stakes contexts, where employing bibliometric indicators might have major consequences for the individual researcher.
Our main focus in this paper is the use of metrics in forming judgments of applicants for academic positions. More specifically, we investigate how bibliometric indicators are used for ranking candidates in two specific settings: biomedicine and economics. Based on qualitative analysis of written assessment reports of applicants, a first set of issues addressed in our study concerns questions such as: To what extent are bibliometric measures used to evaluate candidates for academic positions? In what ways are these measures used? And how are different indicators compared, negotiated and discussed?
Our findings aim to elucidate the extent and type of bibliometrics used for evaluation purposes, and in doing so open up an understanding of how individuals are evaluated. Our selection of fields is motivated by an ambition to study disciplines that both draw on metrics but differ in their social and intellectual structure. Building on the works of Whitley (Whitley 2000; Whitley and Gläser 2008), we infer that differences in the organization of research fields are likely to have direct consequences for the formation of evaluation practices. The degree of dependency, heterogeneity in research practices and publication strategies, as well as the agreement on research goals and methods, are some of the factors that are likely to influence the assessment of research. The widespr