Efficiency of research performance and the glass researcher
Abramo and D’Angelo (in press) doubt the validity of the established size-independent indicators of citation impact and argue instead for measuring scientific efficiency, using their Fractional Scientific Strength (FSS) indicator. This note comments on several questionable, and a few commendable, approaches in their paper.
💡 Research Summary
Abramo and D’Angelo open their paper by questioning the validity of the widely used size‑independent citation impact indicators such as mean citation count, MNCS, and the proportion of top‑10 % papers. Their central argument is that these metrics ignore the input side of research—most notably the amount of funding and the scale of collaboration—thereby producing distorted assessments, especially for large, multi‑author, or high‑cost projects. To address this gap they propose a new indicator, Fractional Scientific Strength (FSS), which attempts to capture “scientific efficiency” by relating output (citations) to input (research expenditures) on a fractional basis.
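To make the contrast concrete, the three indicator families mentioned above can be sketched in a few lines. This is an illustrative toy computation, not the official CWTS/Leiden implementation; the field baselines and top-10% flags are assumed to come from a bibliometric database.

```python
def size_independent_indicators(citations, field_baselines, is_top10):
    """Compute three common size-independent impact indicators.

    citations       -- citation counts of the unit's papers
    field_baselines -- expected citations for each paper's field/year
                       combination (assumed available from a database)
    is_top10        -- whether each paper is among the top 10% most
                       cited in its field (assumed precomputed)
    """
    n = len(citations)
    return {
        # Mean citation count per paper.
        "mean_citations": sum(citations) / n,
        # MNCS-style score: average of actual/expected citation ratios;
        # 1.0 means citation impact at the world average.
        "mncs": sum(c / b for c, b in zip(citations, field_baselines)) / n,
        # Proportion of papers in their field's top 10% by citations.
        "pp_top10": sum(is_top10) / n,
    }

result = size_independent_indicators(
    citations=[12, 3, 45, 0, 8],
    field_baselines=[10.0, 5.0, 10.0, 4.0, 8.0],
    is_top10=[False, False, True, False, False],
)
```

None of these quantities refers to inputs: a unit's score is unchanged whether the papers cost ten thousand or ten million euros to produce, which is precisely the gap the authors target.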
The construction of FSS proceeds in two steps. First, each paper’s citation count is fractionally allocated among its authors, assuming equal contribution, and then normalized by the amount of research money attributed to that paper. In practice this means dividing the total citations of a paper by the number of co‑authors and by the per‑paper share of the funding pool, yielding a citation‑per‑euro (or per‑dollar) figure. Second, the fractional citation values for all papers produced by a researcher, department, or institution are summed and finally divided by the total research budget of the unit under evaluation. The resulting ratio is interpreted as the efficiency with which the unit converts financial resources into scholarly impact.
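The two-step construction described above can be sketched as follows. This is a deliberate simplification of the summary, not the published FSS formula: it assumes equal author contributions and omits the field normalization of citations and the salary-based costing that the full indicator uses.

```python
def fss_sketch(papers, total_budget):
    """Simplified sketch of the two-step FSS construction.

    papers       -- list of dicts with 'citations' and 'n_authors'
                    for each paper (co-)authored by the unit
    total_budget -- the unit's total research expenditure over the
                    evaluation period (same currency throughout)
    """
    # Step 1: fractionalize each paper's citations across its
    # co-authors, assuming equal contributions.
    fractional_impact = sum(p["citations"] / p["n_authors"] for p in papers)
    # Step 2: divide the summed fractional impact by total spending,
    # yielding citation impact per unit of funding.
    return fractional_impact / total_budget

# Two papers: 30 citations split 3 ways, 10 citations split 2 ways,
# against a 50,000-euro budget -> (10 + 5) / 50000 = 0.0003.
efficiency = fss_sketch(
    [{"citations": 30, "n_authors": 3},
     {"citations": 10, "n_authors": 2}],
    total_budget=50_000.0,
)
```

The sketch makes the dilution effect discussed later visible: adding co-authors shrinks the numerator share per paper, while adding budget grows the denominator, so either can lower the efficiency score without any change in raw citations.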
The authors ground their proposal in production‑function theory, drawing an analogy between economic output per unit of capital and scholarly output per unit of research investment. They argue that, just as firms are judged by profit per unit of capital, researchers should be judged by citation impact per unit of funding. The paper then moves to an empirical test using data from the Italian National Research Council (CNR) covering the period 2008‑2017. Detailed records of research grants, author lists, and citation counts (sourced from Web of Science) are merged to compute both traditional size‑independent indicators and the new FSS for each researcher.
The empirical results reveal several noteworthy patterns. First, there is a moderate correlation between FSS and conventional impact metrics, indicating that the two families of measures capture overlapping but not identical dimensions of performance. Second, researchers who belong to large collaborative networks often score highly on average citation metrics but receive lower FSS values because their per‑paper funding share is diluted across many co‑authors. Conversely, some investigators with modest publication counts achieve relatively high FSS scores because they produce highly cited work with comparatively low funding inputs. Third, at the institutional level, departments that appear elite under traditional citation rankings sometimes fall behind when evaluated by FSS, suggesting that the latter metric penalizes “expensive” research strategies that do not yield proportionally higher citation returns.
Abramo and D’Angelo discuss the policy implications of adopting an efficiency‑oriented metric. In a funding environment where budgets are increasingly constrained, FSS could serve as a decision‑support tool for allocating resources to the most “cost‑effective” research units. It also offers a quantitative basis for performance‑based funding schemes that aim to reward not just impact but impact relative to investment. Moreover, the fractional counting approach encourages a more nuanced view of authorship, potentially fostering greater transparency about individual contributions.
However, the authors are careful to acknowledge several limitations. The first concerns data quality: accurate, comprehensive accounting of research expenditures is not universally available, and many projects receive indirect or private funding that is difficult to capture. The second concerns the equal‑contribution assumption embedded in the fractional counting; in reality, author contributions are highly heterogeneous, and many journals now require contribution statements that could be leveraged for more precise weighting. The third limitation is disciplinary heterogeneity: citation practices, typical team sizes, and funding intensities vary dramatically across fields, which may render cross‑field comparisons of FSS problematic without field‑specific normalization. Finally, the focus on citations as the sole output measure excludes other valuable outcomes such as patents, software, policy influence, or societal impact, potentially biasing the efficiency assessment toward fields where citation accumulation is rapid.
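The second limitation, the equal-contribution assumption, has a straightforward remedy if contribution data exist. The sketch below shows contribution-weighted fractional counting; the weights (e.g., derived from CRediT-style contribution statements) are an assumption of this illustration, not part of the FSS proposal itself.

```python
def weighted_fractional_citations(citations, weights):
    """Split a paper's citations across co-authors by contribution
    weight instead of equal shares.

    citations -- the paper's total citation count
    weights   -- one nonnegative contribution weight per co-author
                 (e.g., from a contribution statement); they are
                 normalized internally, so any scale works
    """
    total = sum(weights)
    # Each author's fractional citation credit is proportional to
    # their share of the total contribution weight.
    return [citations * w / total for w in weights]

# A paper with 20 citations where the first author did roughly
# three times the work of the second: credit splits 15 / 5
# instead of the equal-count 10 / 10.
shares = weighted_fractional_citations(20, [3, 1])
```

Plugging such weighted shares into the FSS numerator would address the heterogeneity concern without changing the rest of the construction.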
In conclusion, Abramo and D’Angelo make a compelling case that traditional size‑independent citation indicators are insufficient for evaluating research efficiency, and they introduce FSS as a promising alternative that explicitly incorporates financial inputs. Their empirical illustration demonstrates that FSS can re‑rank researchers and institutions in ways that align more closely with cost‑effectiveness considerations. Yet, the practical adoption of FSS will require addressing methodological challenges—particularly the need for richer input data, more sophisticated author‑contribution weighting, and field‑specific calibration. Future work could extend the framework to incorporate multiple input dimensions (e.g., personnel time, equipment, infrastructure) and multiple output dimensions (e.g., patents, data sets, societal impact) to produce a truly multidimensional efficiency index. Only then can the academic community move toward evaluation practices that balance impact, fairness, and responsible stewardship of public research funds.