Ranking research institutions by the number of highly-cited articles per scientist

Reading time: 5 minutes

📝 Original Info

  • Title: Ranking research institutions by the number of highly-cited articles per scientist
  • ArXiv ID: 1810.12727
  • Date: 2014-06-30
  • Authors: Abramo, G., D’Angelo, C.A., & Di Costa, F.

📝 Abstract

In the literature and on the Web we can readily find research excellence rankings for organizations and countries by either total number of highly-cited articles (HCAs) or by ratio of HCAs to total publications. Neither are indicators of efficiency. In the current work we propose an indicator of efficiency, the number of HCAs per scientist, which can complement the productivity indicators based on impact of total output. We apply this indicator to measure excellence in the research of Italian universities as a whole, and in each field and discipline of the hard sciences.


📄 Full Content

In Abramo & D'Angelo (2014), we provide the definition, measurement operationalization, and underlying theory of an indicator of research productivity named Fractional Scientific Strength (FSS). We have used FSS over the past eight years to rank the performance of Italian professors and universities. FSS embeds both publication and citation counts, and so departs from the traditional bibliometric definition of productivity as the number of publications per researcher. The conception behind FSS is instead that the more a researcher publishes and is cited over a period of time, the higher their productivity.

Productivity is the quintessential indicator of efficiency in any production system. For this reason, we hold that it should also be the main indicator in assessing the performance of individual researchers and their institutions. Certainly, it cannot be the only indicator. In designing evaluation systems, the appropriate choice of performance indicators depends on the context and on the policy and management objectives of the evaluation. The task of the bibliometrician is thus to identify and recommend the indicators best suited to the particular assessment exercise. In addition to productivity, other measures we typically propose to policy-makers and research administrators include: the rate of concentration of unproductive researchers; the rate of concentration of top scientists (defined as authors of highly-cited publications); and the dispersion of performance within and between research units. For all these indicators, we produce rankings that inform the decision-maker on the different quality dimensions of individual scientists, research units, and institutions by field, by discipline, and as a whole.
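The two concentration measures mentioned above reduce to simple ratios over a staff roster. The following sketch is purely illustrative: the productivity scores, the zero-output definition of "unproductive", and the top-scientist cutoff are invented assumptions, not the authors' operationalization.

```python
# Illustrative sketch: rate of unproductive researchers and rate of
# top scientists in a research unit. All numbers and thresholds are
# made-up assumptions for demonstration only.
productivity = {  # researcher -> productivity score (e.g. FSS-like)
    "r1": 0.0, "r2": 0.4, "r3": 1.2, "r4": 2.8, "r5": 0.0,
}
top_threshold = 2.0  # assumed cutoff for "top scientist"

n = len(productivity)
unproductive_rate = sum(1 for p in productivity.values() if p == 0) / n
top_rate = sum(1 for p in productivity.values() if p >= top_threshold) / n

print(f"unproductive: {unproductive_rate:.0%}, top: {top_rate:.0%}")
```

In practice such rates would be computed per field or discipline, since publication and citation behavior differ across fields.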

In the current work we present and apply a further indicator of performance for the research unit, in some senses complementary to the measure of research productivity (FSS). The new indicator is the number of highly-cited articles (HCAs) per researcher. To better demonstrate the complementary character of the two indicators, we begin from the axiom at the basis of productivity measures in many production systems. In the stock market, for example, the axiom would hold that the performance of two traders investing the same amount of money in two different stock portfolios bearing the same risks is the same if the rate of return on their investments is the same. An investor can hold a portfolio of size m where m−1 of the stocks earned nothing and only one stock earned n euros; its performance is considered equal to that of a portfolio where each of the m stocks earns n/m euros, all other factors constant. In the same way, other conditions being equal, a researcher publishing one article with n citations is considered to have exactly the same productivity as another researcher producing m articles with n/m citations each. This axiomatic assumption of a linear relationship between the scientific impact of articles and the number of their citations is debatable: someone could argue that an article presenting a breakthrough discovery or radical invention, and so cited 1,000 times, is more important than 10 articles presenting incremental advancements of science or technology, each cited 100 times.

The more popular performance indicators, such as those based simply on publication counts, as well as the h-index, would rank the author of a single, albeit highly-cited, publication lower. Our FSS indicator of productivity, by contrast, treats such cases as equivalent. That is why we regard it as useful to flank FSS with another indicator that ranks research units or universities by the number of HCAs per researcher. Fundamentally this is still an indicator of productivity (i.e. a ratio of output to input), with the difference that the output of interest here is not overall research impact, but excellent results only. Conventional wisdom would lead one to expect a positive correlation between the rankings by the two indicators at the individual level. In fact, Abramo, Cicero & D’Angelo (2014a) have shown that the most productive researchers (by FSS) are the ones who produce most of the HCAs.
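The indicator itself is a ratio of excellent output to staff. The sketch below is a hedged illustration: the fixed citation threshold standing in for "highly cited", the whole counting of articles, and the data are all assumptions; the paper's actual operationalization (e.g. field-normalized percentile cutoffs, fractional counting by co-authorship) may differ.

```python
# HCAs per researcher for a research unit: count of highly-cited
# articles divided by research staff size. The threshold defining
# "highly cited" (here a flat 50 citations) is an assumption made
# for illustration only.
def hcas_per_researcher(article_citations, n_researchers, hca_threshold=50):
    hcas = sum(1 for c in article_citations if c >= hca_threshold)
    return hcas / n_researchers

unit_citations = [3, 120, 55, 8, 70, 2]  # citations of the unit's articles
print(hcas_per_researcher(unit_citations, n_researchers=4))  # 3 HCAs / 4 staff
```

Because the denominator is staff size rather than total publications, the measure captures efficiency: a small unit producing a few HCAs can outrank a large unit producing many.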

The reader may reasonably wonder whether there is any difference between the new indicator and the "concentration of top scientists", defined above as the authors of HCAs. In fact, the literature suggests that these are indeed different conceptions of the measurement of the scientific excellence of institutions, and that both can be usefully applied (Tijssen, 2003). The measurement can be conducted through two distinct approaches: from the perspective of the excellence of the research staff, or from that of their research products. The first serves the purpose of identifying the institutions with the highest number of top scientists, regardless of the total number of top articles produced; the seco


Reference

This content is AI-processed based on open access ArXiv data.
