Twenty Hirsch index variants and other indicators giving more or less preference to highly cited papers
The Hirsch index or h-index is widely used to quantify the impact of an individual’s scientific research output, determining the highest number h of a scientist’s papers that received at least h citations. Several variants of the index have been proposed in order to give more or less preference to highly cited papers. I analyse the citation records of 26 physicists, discussing various suggestions, in particular A, e, f, g, h(2), h_w, h_T, \hbar, m, {\pi}, R, s, t, w, and maxprod. The total number of publications and of cited publications, as well as the highest and the average number of citations, are also compared. Advantages and disadvantages of these indices and indicators are discussed. Correlation coefficients are determined, quantifying which indices and indicators yield similar rankings of the 26 datasets and which yield more deviating ones. For 6 datasets the determination of the indices and indicators is visualized.
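The h-index definition above (the largest h such that h papers each have at least h citations) is straightforward to compute from a list of citation counts. The following is a minimal illustrative sketch, not code from the paper:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank  # the top `rank` papers all have >= rank citations
        else:
            break
    return h

# A record with citation counts 10, 8, 5, 4, 3 has h = 4:
# four papers each have at least 4 citations, but not five with >= 5.
print(h_index([10, 8, 5, 4, 3]))  # -> 4
```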
💡 Research Summary
The paper provides a systematic comparison of twenty variants of the Hirsch index (h‑index) and several related bibliometric indicators that differ in the degree to which they reward highly cited papers. Using citation records from 26 physicists, the author computes the traditional h‑index together with a suite of proposed alternatives—denoted A, e, f, g, h(2), h_w, h_T, \hbar, m, π, R, s, t, w, and maxprod—as well as four basic descriptive measures (total number of publications, number of cited publications, maximum citations, and average citations per paper).
The analysis begins with a clear methodological description: each scientist’s publication list and citation counts were extracted from a standard bibliographic database, and all indices were calculated uniformly. The variants fall into two broad conceptual families. The first family (g, h_T, maxprod, etc.) amplifies the contribution of a few very highly cited papers by using squared citations, products of citations, or cumulative citation thresholds. The second family (e, f, s, w, etc.) smooths the distribution, giving more weight to the bulk of moderately cited works and thereby reducing the dominance of outliers. Additional indices such as m (h divided by career length) and π (h normalized by mean citations) attempt to correct for career duration and overall citation intensity.
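To make the two families concrete, here is a hedged sketch of two of the variants mentioned above: the g-index (largest g such that the top g papers together have at least g² citations, which rewards highly cited papers) and the career-length correction described for m (h divided by years active, often called the m-quotient). These are standard textbook definitions used for illustration, not the paper's own code:

```python
def g_index(citations):
    """Largest g such that the g most-cited papers have >= g^2 citations in total."""
    counts = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, c in enumerate(counts, start=1):
        total += c
        if total >= rank * rank:  # cumulative citations reach the g^2 threshold
            g = rank
    return g

def m_quotient(h, years_active):
    """h-index normalized by career length in years (as described above)."""
    return h / years_active

# For citation counts 10, 8, 5, 4, 3 the g-index is 5 (30 >= 25),
# while the h-index of the same record is only 4.
print(g_index([10, 8, 5, 4, 3]))  # -> 5
```

Note that g ≥ h always holds on the same record, which is one way to see that g gives extra weight to a few very highly cited papers.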
Correlation analysis (Pearson and Spearman) reveals that the traditional h‑index is tightly linked to g, h_T, and maxprod (r > 0.9), indicating that these three measures essentially rank scientists in a similar way when a few blockbuster papers dominate the record. In contrast, the “egalitarian” indices e, f, s, and w show moderate correlations with h (r ≈ 0.5–0.7), reflecting substantial rank reshuffling for scientists whose output consists mainly of mid‑range citations. The m‑index and π‑index display weaker but still meaningful correlations, highlighting their role in adjusting for career length and average citation density.
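The two correlation measures used in the analysis can be illustrated with a short self-contained sketch (pure Python, with Spearman computed as Pearson on average ranks; illustrative toy data, not the paper's datasets):

```python
from statistics import mean

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def ranks(values):
    """Ranks (1-based), averaging ranks over ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank for the tied group
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rank correlation: Pearson applied to the ranks."""
    return pearson(ranks(x), ranks(y))

# Monotone but nonlinear relation: Spearman is exactly 1, Pearson is below 1.
x, y = [1, 2, 3, 4], [2, 4, 8, 16]
print(spearman(x, y), pearson(x, y))
```

This distinction matters here: Spearman compares only the *rankings* of the 26 scientists under two indices, while Pearson is sensitive to the numerical values of the indices themselves.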
To illustrate the practical impact of these differences, the author visualizes six representative cases: the three scientists with the highest h‑scores and the three with the lowest. In the high‑h group, g, h_T, and maxprod closely track h, while the egalitarian indices fall short because the citation distribution is heavily skewed toward a few top papers. Conversely, in the low‑h group, e, f, s, and w often exceed h, reflecting a more balanced citation profile where many papers receive modest attention. These plots make it evident that the choice of metric can dramatically alter perceived performance, especially for early‑career researchers or those working in subfields with lower citation rates.
The discussion weighs the pros and cons of each class of indicators. Metrics that privilege highly cited papers are valuable for quickly identifying researchers whose work has generated major impact, but they risk marginalizing emerging scholars and those whose contributions are spread across many modestly cited articles. Egalitarian metrics capture overall productivity and can be more forgiving to a broader set of contributions, yet they may overstate the influence of work that has not achieved significant uptake in the literature. The m‑index and π‑index provide useful normalization for career stage and field‑specific citation practices, making them attractive for comparative assessments across heterogeneous groups.
In conclusion, the author argues against reliance on any single bibliometric indicator. Instead, a composite approach tailored to the evaluation goal is recommended: combine h‑type measures (h, g, maxprod) when the focus is on breakthrough impact, and supplement them with egalitarian indices (e, f, s, w) and normalization metrics (m, π) when assessing sustained productivity, career development, or cross‑disciplinary performance. The paper also cautions that citation databases have inherent limitations and that disciplinary citation cultures must be taken into account when interpreting any of these metrics.