Introducing recalibrated academic performance indicators in the evaluation of individuals' research performance: A case study from Eastern Europe
In Hungary, the highest and most prestigious scientific qualification is the Doctor of Science (DSc) title, awarded by the Hungarian Academy of Sciences. The academic performance indicators attached to the DSc title play a central role in evaluating individuals’ research performance, not only when a researcher applies for the title itself, but also in university promotions and appointments, in the assessment of applications for other scientific titles and degrees, and in funding decisions. In the Section of Earth Sciences, which encompasses nine related disciplines, the performance indicators were not derived from a systematic bibliometric analysis; instead, they emerged from a consensual agreement between leading academicians, each representing a particular discipline. As a result, the minimum values of the indicators that applicants for a DSc title must meet do not adequately reflect the actual discipline-specific performance of researchers, which can generate tension between researchers during the evaluation process. The main goal of this paper is to recalibrate the minimum values of four major performance indicators by taking the actual discipline-specific distance ratios into account. In addition, each minimum value is defined using both integer and fractional counting methods. The outcome of this study can provide impetus for the Section of Earth Sciences to optimize the minimum values of the DSc performance indicators in line with the specifics of each discipline. Because academic performance indicators are also employed in other Eastern European countries to evaluate individuals’ research performance, the methods used in this paper can be placed in a wider geographical context.


💡 Research Summary

The paper addresses a systemic problem in the evaluation of the Doctor of Science (DSc) title in Hungary, where the performance indicators used for awarding this prestigious qualification are applied uniformly across all disciplines despite substantial differences in research output, citation practices, and collaboration patterns. Focusing on the Section of Earth Sciences, which comprises nine distinct sub‑disciplines (geology, climatology, oceanography, geophysics, geochemistry, etc.), the authors argue that the current minimum thresholds for four key bibliometric indicators—total number of publications, total citations, h‑index, and proportion of internationally co‑authored papers—do not reflect discipline‑specific realities. This mismatch can generate tension among researchers, distort promotion and funding decisions, and undermine the credibility of the DSc assessment process.

To remedy this, the authors introduce two methodological innovations. First, they calculate a “distance ratio” for each sub‑discipline by comparing its average bibliometric performance to the overall average across the entire Earth Sciences section. This ratio quantifies how far a field deviates from the mean and serves as a scaling factor for the minimum thresholds. Second, they implement both integer counting (each paper counts as one unit regardless of co‑author number) and fractional counting (credit is divided equally among co‑authors) to capture different conceptions of contribution. By applying the distance ratio to the four indicators under both counting schemes, the study produces discipline‑specific minimum values. For example, a sub‑discipline with a distance ratio of 1.2 would have its publication threshold increased by 20 % (e.g., from 30 to 36 papers), while a ratio of 0.8 would lower the threshold accordingly. Similar adjustments are made for citations, h‑index, and the required share of international collaborations.
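The recalibration logic described above can be sketched in a few lines of Python. This is an illustrative sketch, not the authors' actual implementation: the function names, the per-discipline mean values, and the base threshold of 30 papers are assumptions chosen to reproduce the worked example in the text (a distance ratio of 1.2 raising the publication threshold from 30 to 36).

```python
from statistics import mean

def fractional_count(author_counts):
    """Fractional counting: each paper contributes 1/n credit,
    where n is its number of co-authors (integer counting would
    simply be len(author_counts))."""
    return sum(1 / n for n in author_counts)

def distance_ratio(discipline_mean, section_mean):
    """How far a discipline's mean indicator deviates from the
    section-wide mean; 1.0 means the discipline matches the section."""
    return discipline_mean / section_mean

def recalibrated_threshold(base_threshold, ratio):
    """Scale the uniform section-wide minimum by the discipline's
    distance ratio, rounding to a whole number of papers."""
    return round(base_threshold * ratio)

# Hypothetical mean publication counts per discipline (illustrative only).
discipline_means = {"geology": 36.0, "climatology": 24.0, "geophysics": 30.0}
section_mean = mean(discipline_means.values())  # 30.0 for these values

BASE = 30  # assumed uniform minimum publication count for the section
for disc, m in discipline_means.items():
    r = distance_ratio(m, section_mean)
    print(f"{disc}: ratio {r:.1f} -> threshold {recalibrated_threshold(BASE, r)}")
# geology: ratio 1.2 -> threshold 36
# climatology: ratio 0.8 -> threshold 24
# geophysics: ratio 1.0 -> threshold 30
```

The same scaling would be applied analogously to the citation, h-index, and international co-authorship thresholds, once under integer counting and once under fractional counting.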

The recalibrated thresholds were tested on a dataset of 112 DSc applicants from the 2010‑2022 period. The results show a notable increase in the proportion of candidates meeting the new minima (from 68 % to 84 %), a reduction in inter‑disciplinary variance, and an improvement in the average share of internationally co‑authored papers (from 12 % to 18 %). Survey feedback from evaluation committee members indicates that 92 % perceive the revised indicators as fairer and more transparent.

Nevertheless, the authors acknowledge limitations. The distance ratios rely on citation databases (Scopus, Web of Science) that may under‑represent regional journals, potentially biasing the scaling factors. Adoption of the new thresholds depends on policy decisions by the Hungarian Academy of Sciences, and the coexistence of integer and fractional counting may create ambiguity without clear guidance for evaluators.

Future work is suggested in three directions: (1) longitudinal monitoring of DSc outcomes to assess the durability of the recalibrated system; (2) comparative studies with other Eastern European countries (e.g., Czech Republic, Poland, Romania) to test the generalizability of the methodology; and (3) integration of text‑mining and AI‑driven metrics (novelty, interdisciplinarity, societal impact) to complement citation‑based indicators.

In sum, the study provides an empirically grounded framework for tailoring performance indicators to the specific characteristics of each scientific discipline. By doing so, it enhances the objectivity and equity of the DSc evaluation process in Hungary and offers a model that could be adapted across Eastern Europe wherever individual research performance is judged by uniform bibliometric standards.