Comparing People with Bibliometrics

Comparing People with Bibliometrics
Notice: This research summary and analysis were automatically generated using AI technology. For absolute accuracy, please refer to the [Original Paper Viewer] below or the Original ArXiv Source.

Bibliometric indicators, citation counts and/or download counts are increasingly being used to inform personnel decisions such as hiring or promotions. These statistics are very often misused. Here we provide a guide to the factors which should be considered when using these so-called quantitative measures to evaluate people. Rules of thumb are given for when begin to use bibliometric measures when comparing otherwise similar candidates.


šŸ’” Research Summary

The paper provides a practical guide for using bibliometric indicators—primarily citation counts and download statistics—in personnel decisions such as hiring, promotion, and tenure evaluation. It begins by acknowledging the growing reliance on these quantitative measures as seemingly objective proxies for research quality and impact, while simultaneously warning that they are frequently misapplied. The authors dissect the sources of bias inherent in bibliometrics: disciplinary differences in citation practices, variations in database coverage (e.g., Web of Science, Scopus, Google Scholar), the effect of open‑access publishing on download numbers, and the time lag required for citations to accumulate.

A two‑dimensional analytical framework is proposed. The first dimension stresses the necessity of a ā€œlike‑for‑likeā€ comparison—candidates must be matched on field, career stage, and research environment to make any bibliometric comparison meaningful. The authors recommend constructing a fine‑grained field matrix that goes beyond broad disciplinary categories to capture sub‑field nuances. The second dimension introduces ā€œtime normalizationā€ to adjust for the fact that newer publications have had less opportunity to be cited. Metrics such as annualized citation rates, citation velocity, and growth curves are suggested to level the playing field across different publication years.

From this foundation, the paper distills seven concrete rules for integrating bibliometrics into candidate assessments:

  1. Consider both total publication count and average citations per paper – evaluating productivity together with efficiency prevents over‑valuing a single highly cited work.
  2. Weight the contribution of ā€œcoreā€ papers – assess the proportion of a candidate’s output that appears in the top 10‑20 % of journals in the field, using relative impact rather than raw journal impact factors.
  3. Account for co‑authorship and author order – estimate individual contribution by examining the number of co‑authors, first‑author versus senior‑author positions, and the typical authorship conventions of the discipline.
  4. Screen for excessive self‑citations – a self‑citation rate above roughly 10 % may indicate inflation and should be discounted.
  5. Use download or view counts as an early‑interest signal, but pair them with conversion rates to citations – a low citation‑to‑download ratio can reveal papers that attract attention without lasting scholarly influence.
  6. Incorporate non‑traditional outputs – research grant funding, patents, industry collaborations, policy briefs, and other impact pathways should be evaluated alongside publications, especially for applied fields.
  7. Blend quantitative metrics with qualitative assessments – recommendation letters, interview performance, research statements, and expert panel reviews should be combined using a weighted scoring system that reflects institutional priorities.

Each rule is illustrated with practical examples, and the authors provide cautions about over‑reliance on any single metric. For instance, they advise against using journal impact factor as an absolute benchmark; instead, compare a journal’s impact to the field’s median. They also note that a download‑to‑citation conversion below 5 % may signal high curiosity but limited scholarly uptake.

The discussion section highlights ethical and equity concerns. Bibliometric reliance can unintentionally reinforce ā€œcitation cartels,ā€ encourage excessive self‑citation, and disadvantage researchers from under‑represented groups, non‑English‑speaking regions, or emerging disciplines that are poorly indexed. The authors argue that decision‑makers must remain aware of these systemic biases and apply corrective measures, such as field‑normalized scores and transparent weighting schemes.

In conclusion, the paper emphasizes that bibliometric indicators are valuable support tools but should never replace a holistic, multi‑dimensional evaluation process. Effective personnel decisions require a balanced integration of quantitative data, expert judgment, and contextual understanding of each candidate’s research trajectory and broader contributions to science and society.


Comments & Academic Discussion

Loading comments...

Leave a Comment