Analyzing the disciplinary focus of universities: Can rankings be a one-size-fits-all?

Reading time: 5 minutes

📝 Original Info

  • Title: Analyzing the disciplinary focus of universities: Can rankings be a one-size-fits-all?
  • ArXiv ID: 1706.02119
  • Date: 2017-06-08
  • Authors: Nicolas Robinson-Garcia (EC3metrics spin-off, Spain), Evaristo Jimenez-Contreras (Universidad de Granada, Spain)

📝 Abstract

The phenomenon of rankings is intimately related to governments' interest in auditing the research output of universities. New forms of managerialism have been introduced into the higher education system, leading to growing interest among funding bodies in developing external evaluation tools for allocating funds. Rankings rely heavily on bibliometric indicators, yet bibliometricians have been very critical of their use. Among other issues, they have pointed out the over-simplistic view rankings take when analyzing the research output of universities, treating them as homogeneous and ignoring disciplinary differences. Although many university rankings now include league tables by field, reducing the complex framework of universities' research activity to a single dimension leads to poor judgment and decision making. This is partly because of the influence disciplinary specialization has on research evaluation. This chapter analyzes, from a methodological perspective, how rankings suppress disciplinary differences, which are key factors for interpreting these rankings correctly.


📄 Full Content

Chapter in Downing, K., F.A. Ganotice (eds). World University Rankings and the Future of Higher Education. IGI Global, pp. 161-185. doi:10.4018/978-1-5225-0819-9.ch009

Analyzing the disciplinary focus of universities: Can rankings be a one-size-fits-all?

Nicolas Robinson-Garcia, EC3metrics spin-off, Spain
Evaristo Jimenez-Contreras, Universidad de Granada, Spain

ABSTRACT

The phenomenon of rankings is intimately related to governments' interest in auditing the research output of universities. New forms of managerialism have been introduced into the higher education system, leading to growing interest among funding bodies in developing external evaluation tools for allocating funds. Rankings rely heavily on bibliometric indicators, yet bibliometricians have been very critical of their use. Among other issues, they have pointed out the over-simplistic view rankings take when analyzing the research output of universities, treating them as homogeneous and ignoring disciplinary differences. Although many university rankings now include league tables by field, reducing the complex framework of universities' research activity to a single dimension leads to poor judgment and decision making. This is partly because of the influence disciplinary specialization has on research evaluation. This chapter analyzes, from a methodological perspective, how rankings suppress disciplinary differences, which are key factors for interpreting these rankings correctly.

Keywords: Higher Education, Specialization, Bibliometric Indicators, Research Policy, Research Evaluation, World-Class Universities, Disciplines, Scientific Output, Science Mapping

INTRODUCTION
In the last five years we have observed a rapid transformation in the way research policymakers use university rankings. These tools have rapidly been integrated as a new support on which to base decisions. They have reshaped the higher education landscape at a global level and become common elements of politicians' and university managers' discourse (Hazelkorn, 2011). Not only have they become key external factors as a means to attract talent and funds, but they are also used as support tools along with bibliometric techniques and other methodologies based on publication and citation data (Narin, 1976). Their heavy reliance on bibliographic data has stirred the research community as a whole, raising serious concerns about the suitability of such data as a means to measure the 'overall quality' of universities (Marginson & Wende, 2007). At the same time, university rankings have caught bibliometricians off guard. Although they use rankings quite often (e.g., journal rankings), they have traditionally disregarded them for institutional evaluation, focusing on more sophisticated techniques and indicators (Moed et al., 1985). On the other hand, university rankings have traditionally been based on survey data and did not consider the use of bibliometric indicators until recently. Moreover, despite their success in the United States, they have had little presence in the European research policy scenario (Nedeva, Barker & Osman, 2014).


The launch of the Shanghai Ranking in 2003 not only marked the starting point of the globalization of the higher education landscape, but also introduced bibliometric-based measures to rank universities. Surprisingly, neither the Shanghai Ranking nor the Times Higher Education World University Rankings and QS Top Universities Rankings were produced by bibliometricians, or even by practitioners. From the beginning, this caught the interest of the bibliometric community, which rapidly positioned itself against the use of these tools. Such strong opposition is summarized in the correspondence between Professor van Raan of Leiden University and the creators of the Shanghai Ranking (Liu, Cheng & Liu, 2005; van Raan, 2005a, 2005b). There, van Raan (2005a) highlights serious methodological and technical concerns which were later echoed by others (e.g., Billaut, Bouyssou & Vincke, 2009). Such shortcomings have to do with the careless use these rankings make of bibliometric data, neglecting many of the limitations of bibliometric databases, and offering composite indicators of dubious meaning which purport to summarize the global position of universities.
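The disciplinary distortion described above can be made concrete with a small, purely illustrative sketch (not taken from the chapter): raw citation averages penalize universities specialized in low-citation fields, while a field-normalized indicator in the spirit of the mean normalized citation score (MNCS) compares each paper against its own field's expected citation rate. The university names, field baselines, and citation counts below are all hypothetical.

```python
# Hypothetical expected citations per paper by field (world averages).
FIELD_BASELINE = {"biomedicine": 20.0, "humanities": 2.0}

# Each university's output: (field, citations) per paper. All numbers invented.
uni_a = [("biomedicine", 18), ("biomedicine", 22), ("biomedicine", 20)]
uni_b = [("humanities", 4), ("humanities", 3), ("humanities", 5)]

def raw_mean(papers):
    """Mean citations per paper, ignoring field (what a naive ranking uses)."""
    return sum(c for _, c in papers) / len(papers)

def mncs(papers):
    """MNCS-style score: average of citations / field expected citations."""
    return sum(c / FIELD_BASELINE[f] for f, c in papers) / len(papers)

print(raw_mean(uni_a), raw_mean(uni_b))  # A looks five times "better" (20.0 vs 4.0)
print(mncs(uni_a), mncs(uni_b))          # B actually outperforms its field (~1.0 vs ~2.0)
```

On raw counts the biomedicine-focused university dominates, yet once field expectations are taken into account the humanities-focused one performs twice its field average. Collapsing both into one league-table score hides exactly this difference.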

Rankings have evolved from marketing tools with a great impact on the image of universities and their capacity to attract talent and funds (Bastedo & Bowman, 2010) into research evaluation tools used strategically by research policymakers to shape their political agendas (Pusser & Marginson, 2013). However, their strong focus on research and their reliance on bibliometric data entail important threats and misinterpretation issues which may 1) endanger

…(Full text truncated)…


Reference

This content is AI-processed based on ArXiv data.
