A review of the characteristics of 108 author-level bibliometric indicators
An increasing demand for bibliometric assessment of individuals has led to a proliferation of new bibliometric indicators, as well as new variants or combinations of established ones. The aim of this review is to provide objective facts about the usefulness of bibliometric indicators of the effects of publication activity at the individual level. This paper reviews 108 indicators that can potentially be used to measure performance at the individual author level, and examines the complexity of their calculation in relation to what they are supposed to reflect and their ease of end-user application.
💡 Research Summary
The paper provides a systematic review of 108 author‑level bibliometric indicators that have been proposed to assess individual research performance. Recognizing the growing demand for quantitative evaluation of scholars, the authors first categorize the indicators into five major dimensions: (1) productivity (e.g., total papers, annual output), (2) citation‑based impact (total citations, average citations, the h‑index and its numerous derivatives such as g‑index, m‑index, hg‑index), (3) career‑duration and sustainability (research age, citation growth over time), (4) collaboration and network characteristics (co‑author count, proportion of international co‑authorship, network centrality measures), and (5) prestige or value (journal impact‑factor weighting, patent linkage, societal impact).
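To make the citation‑based family concrete, the following is a minimal Python sketch of three of the indicators named above: the h‑index, the g‑index, and the m‑index (m‑quotient, the h‑index divided by career length). The function names, example citation counts, and career‑length value are illustrative assumptions, not data or code from the paper.

```python
def h_index(citations):
    """Largest h such that h papers each have at least h citations."""
    cits = sorted(citations, reverse=True)
    # With citations sorted in descending order, c >= rank holds for a prefix only.
    return sum(1 for rank, c in enumerate(cits, start=1) if c >= rank)

def g_index(citations):
    """Largest g such that the top g papers together have at least g^2 citations."""
    cits = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, c in enumerate(cits, start=1):
        total += c
        if total >= rank ** 2:
            g = rank
    return g

def m_quotient(citations, years_since_first_publication):
    """h-index divided by career length in years (the m-index / m-quotient)."""
    return h_index(citations) / years_since_first_publication

# Hypothetical citation counts for one author's papers
papers = [25, 18, 12, 9, 7, 4, 2, 1, 0]
print(h_index(papers))         # 5  (five papers with at least 5 citations each)
print(g_index(papers))         # 8  (top 8 papers have 78 >= 64 citations in total)
print(m_quotient(papers, 10))  # 0.5 for a 10-year publication career
```

As the example shows, the g‑index rewards a few highly cited papers more than the h‑index does, while the m‑quotient is one of the variants intended to reduce the h‑index's bias toward career length.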
For each indicator, the authors evaluate three aspects: (a) theoretical rationale – what aspect of scholarly activity the metric is intended to capture; (b) computational complexity – ranging from simple counts that can be extracted directly from bibliographic databases, through weighted‑average or field‑normalised formulas that require additional reference data, to advanced models that involve time‑series regression, network analysis, or machine‑learning optimisation; and (c) practical usability – data requirements, need for author disambiguation, and the level of technical expertise required for implementation.
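As an illustration of the middle tier of computational complexity, the sketch below computes a simple field‑normalised citation score: each paper's citations are divided by the average citations of papers in the same field and year, and the ratios are averaged over the author's portfolio. The field labels and baseline values are made‑up assumptions for illustration; the paper does not prescribe this particular formula.

```python
# Hypothetical expected (average) citations per paper, by field and year.
field_year_baseline = {
    ("oncology", 2018): 14.2,
    ("mathematics", 2018): 3.1,
    ("oncology", 2020): 9.7,
}

# Hypothetical publication record for one author.
author_papers = [
    {"field": "oncology", "year": 2018, "citations": 21},
    {"field": "mathematics", "year": 2018, "citations": 4},
    {"field": "oncology", "year": 2020, "citations": 5},
]

def normalised_citation_score(papers, baseline):
    """Mean of citations divided by the field-and-year expected citations."""
    ratios = [p["citations"] / baseline[(p["field"], p["year"])] for p in papers]
    return sum(ratios) / len(ratios)

print(round(normalised_citation_score(author_papers, field_year_baseline), 2))
# ~1.09 -> the author's papers are cited slightly above their field averages
```

The extra step that raises the complexity is not the arithmetic itself but the reference data: the baselines must be compiled from a citation database for every relevant field, year, and document type.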
The analysis highlights several recurring tensions. Simple count‑based metrics are easy to compute but can over‑estimate productivity when many papers are co‑authored or when disciplinary publication cultures differ. Fractional counting and weighted co‑authorship indices have been introduced to mitigate this bias, yet they increase computational burden and depend on reliable author‑contribution data. The classic h‑index, while popular for its blend of productivity and impact, favours senior scholars and penalises early‑career researchers; its many variants (g‑index, m‑index, etc.) attempt to address specific shortcomings but often add little substantive information while complicating interpretation.
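To show how fractional and position‑weighted counting redistribute credit among co‑authors, here is a minimal sketch of two standard schemes (equal fractional shares and harmonic weighting by byline position). These are common textbook variants used for illustration, not necessarily the exact formulas catalogued in the review.

```python
def fractional_credit(n_authors):
    """Each of n co-authors receives an equal 1/n share of the paper."""
    return [1.0 / n_authors] * n_authors

def harmonic_credit(n_authors):
    """Credit declines with byline position: author i gets (1/i) / sum(1/j)."""
    weights = [1.0 / i for i in range(1, n_authors + 1)]
    total = sum(weights)
    return [w / total for w in weights]

# A five-author paper: whole (full) counting would give every author credit 1.0.
print(fractional_credit(5))
# [0.2, 0.2, 0.2, 0.2, 0.2]
print([round(w, 3) for w in harmonic_credit(5)])
# [0.438, 0.219, 0.146, 0.109, 0.088] -- the first author gets the largest share
```

Both schemes require knowing the full author list (and, for position‑weighted variants, the byline order or stated contributions) for every paper, which is exactly the added data burden the review points to.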
Data source heterogeneity is another critical issue. Web of Science, Scopus, and Google Scholar differ in coverage, citation windows, and author‑identification algorithms. Consequently, the same indicator may yield divergent values across databases, especially for interdisciplinary or non‑English publications. The authors note that persistent author identifiers such as ORCID improve disambiguation, but full integration remains incomplete.
From an applied perspective, the review finds that low‑complexity indicators (raw counts, basic h‑index) are suitable for rapid screening, whereas high‑complexity, field‑normalised or network‑based metrics are best reserved for in‑depth evaluation where the evaluator has access to specialised software or scripting capabilities. The choice of metric should therefore be driven by the specific decision context (hiring, promotion, grant allocation) and the technical capacity of the assessment team.
In conclusion, the paper proposes three guiding principles for future indicator development: (1) transparency – full disclosure of formulas, weighting schemes, and assumptions; (2) reproducibility – reliance on openly accessible data and robust author‑identification methods; and (3) multidimensionality – designs that simultaneously capture productivity, impact, collaboration, and sustainability rather than collapsing all aspects into a single scalar. By adhering to these principles, new metrics can meaningfully complement or replace existing ones, leading to fairer, more nuanced assessments of individual scholarly contributions.
Overall, the authors argue that no single bibliometric indicator can fully represent a researcher’s performance. A thoughtful combination of complementary metrics, matched to the evaluation purpose and data environment, offers the most reliable and equitable approach to author‑level assessment.