Comparison of research productivity of Italian and Norwegian professors and universities


📝 Abstract

This is the first attempt to apply a research efficiency indicator (FSS) in a country other than Italy, to assess and compare the performance of professors and universities, within and between countries. Special attention has been devoted to presenting the methodology developed to set up a common field classification scheme for professors, and to overcoming the limited availability of comparable input data. Results of the comparison between countries, carried out for the 2011-2015 period, show similar average performances of professors, but noticeable differences in the distributions, whereby Norwegian professors are more concentrated in the tails. Norway shows notably higher performance in Mathematics and Earth and Space Sciences, while Italy does so in Biomedical Research and Engineering.

📄 Content

Comparison of research productivity of Italian and Norwegian professors and universities

Authors: Giovanni Abramo1*, Dag W. Aksnes2, Ciriaco Andrea D’Angelo3

Affiliations:

1 Laboratory for Studies in Research Evaluation, Institute for System Analysis and Computer Science (IASI-CNR), National Research Council, Rome, Italy
2 Nordic Institute for Studies in Innovation, Research and Education, Oslo, Norway
3 University of Rome “Tor Vergata”, Dept of Engineering and Management, Rome, Italy

Abstract

This is the first attempt to apply a research efficiency indicator (FSS) in a country other than Italy, to assess and compare the performance of professors and universities, within and between countries. Special attention has been devoted to presenting the methodology developed to set up a common field classification scheme for professors, and to overcoming the limited availability of comparable input data. Results of the comparison between countries, carried out for the 2011-2015 period, show similar average performances of professors, but noticeable differences in the distributions, whereby Norwegian professors are more concentrated in the tails. Norway shows notably higher performance in Mathematics and Earth and Space Sciences, while Italy does so in Biomedical Research and Engineering.

Keywords FSS; research evaluation; bibliometrics.


* Corresponding author

Acknowledgement We wish to thank Gunnar Sivertsen and Lin Zhang for acting as catalysts in this research project.


  1. Introduction

In their article “A farewell to the MNCS and like size-independent indicators” (Abramo & D’Angelo, 2016a), published in a special section of the Journal of Informetrics (2016, vol. 10, issue 2), the authors: i) refuted the validity of the “Mean Normalized Citation Score” (MNCS) and all similar per-publication citation indicators as measures of research performance; ii) urged the adoption of efficiency (output-to-input ratio) indicators, such as the “Fractional Scientific Strength” or FSS (Abramo & D’Angelo, 2014) and highly cited articles per scientist (Abramo & D’Angelo, 2015); and iii) made recommendations on how to expedite the shift to the proposed new paradigm of research performance measurement, away from the old one based on the MNCS and the like. The manuscript elicited a number of responses from eminent scholars in the field. Most respondents argued that accessing input data could pose formidable problems in most countries. Furthermore, even if input data were accessible, their quality and cross-nation comparability would be doubtful. “Collecting standardized input data requires a high degree of coordination between countries, and it is probably not realistic to expect that this degree of coordination can be achieved” (Waltman, Van Eck, Visser, & Wouters, 2016). Thelwall (2016) feared that the input data provided by research institutions could easily be gamed. Bornmann and Haunschild (2016) warned about possible uncontrollable uses of collected input data: “Since we are not only scientometricians (who appreciate the availability of data), but also researchers ourselves, we should not support such transparent systems which form the ‘glass researcher’”. For the most part, these are difficulties and apprehensions that Abramo and D’Angelo share. The point they raise is whether such difficulties are sufficient to induce scientometricians to renounce embarking on a path that can no longer be avoided (Abramo & D’Angelo, 2016b).
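The full FSS formula is not reproduced in this excerpt. The sketch below is only a minimal illustration of the general idea of an efficiency (output-to-input ratio) indicator in the spirit of FSS: field-normalized, fractionalized citation output divided by the labor cost of producing it. All input names (citations, field baseline, author fraction, salary, years) are assumptions for illustration, not the study's actual data model.

```python
from dataclasses import dataclass

@dataclass
class Publication:
    citations: float       # citations received by the publication
    field_baseline: float  # average citations of same-year, same-field publications
    author_fraction: float # fractional contribution of the professor (0..1]

def efficiency_score(publications, salary, years):
    """Output-to-input ratio: field-normalized, fractionalized output
    per unit of labor cost over the observation period (a sketch in the
    spirit of FSS, not the exact published formula)."""
    output = sum(p.citations / p.field_baseline * p.author_fraction
                 for p in publications)
    return output / (salary * years)
```

For example, two publications, one at twice its field baseline with half authorship and one at twice its baseline with sole authorship, yield a normalized output of 3; dividing by salary times years gives the per-cost productivity used to rank professors within the same field.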
What we propose in this work is the first attempt ever to apply the FSS indicator of research performance of professors and universities to a country other than Italy, namely Norway. As in Italy, accessing professors’ identities and affiliations in Norway is quite straightforward. The main problem we encountered was aligning the research field classification of professors, as the classification schemes in the two countries are rather different. As we show in Section 4, the field classification of professors is in fact critical to comparing performance, both at the individual and aggregate levels. The comparison of performance of professors from the two countries is not an end in itself; it is also a means to make the performance measures in each country more robust in those fields where the number of observations in either country is low. The principal aims of the work are to present all the difficulties we encountered in achieving comparable measurements of performance in the two countries, and to show how we overcame them. We have therefore devoted special care to describing the procedure used to operationalize the measurements, and all the limits and assumptions involved. This should make future replications of the exercise in other countries more straightforward. In this study, we have measured and compared the research productivity of Italian and Norwegian professors and universities.
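The alignment of two national field classification schemes can be pictured as a crosswalk from each country's field codes to a common scheme, with professors whose field has no counterpart excluded from the comparison. The codes and field names below are hypothetical placeholders, not the actual mapping developed in the study.

```python
# Hypothetical crosswalks from national field codes to a common scheme
# (illustrative only; the study's real classification schemes differ).
IT_TO_COMMON = {"MAT/05": "Mathematics", "ING-IND/35": "Engineering"}
NO_TO_COMMON = {"Matematikk": "Mathematics", "Industriell okonomi": "Engineering"}

def harmonize(professors, crosswalk):
    """Relabel each (name, national_field) pair with the common field,
    dropping professors whose field has no counterpart in the crosswalk."""
    out = []
    for name, field in professors:
        common = crosswalk.get(field)
        if common is not None:
            out.append((name, common))
    return out
```

Applying the two crosswalks to the Italian and Norwegian rosters places both sets of professors in the same field categories, which is the precondition for the within-field performance comparisons described above.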

This content is AI-processed based on ArXiv data.
