Comparing People with Bibliometrics
Bibliometric indicators, citation counts, and download counts are increasingly being used to inform personnel decisions such as hiring or promotion. These statistics are very often misused. Here we provide a guide to the factors that should be considered when using these so-called quantitative measures to evaluate people. Rules of thumb are given for when to use bibliometric measures when comparing otherwise similar candidates.
Research Summary
The paper provides a practical guide for using bibliometric indicators, primarily citation counts and download statistics, in personnel decisions such as hiring, promotion, and tenure evaluation. It begins by acknowledging the growing reliance on these quantitative measures as seemingly objective proxies for research quality and impact, while simultaneously warning that they are frequently misapplied. The authors dissect the sources of bias inherent in bibliometrics: disciplinary differences in citation practices, variations in database coverage (e.g., Web of Science, Scopus, Google Scholar), the effect of open-access publishing on download numbers, and the time lag required for citations to accumulate.
A two-dimensional analytical framework is proposed. The first dimension stresses the necessity of a "like-for-like" comparison: candidates must be matched on field, career stage, and research environment to make any bibliometric comparison meaningful. The authors recommend constructing a fine-grained field matrix that goes beyond broad disciplinary categories to capture sub-field nuances. The second dimension introduces "time normalization" to adjust for the fact that newer publications have had less opportunity to be cited. Metrics such as annualized citation rates, citation velocity, and growth curves are suggested to level the playing field across different publication years.
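The time-normalization idea can be sketched numerically. The function name and the example figures below are illustrative, not taken from the paper; this is a minimal sketch of a citations-per-year adjustment:

```python
from datetime import date

def annualized_citation_rate(citations, pub_year, today=None):
    """Citations per year since publication (illustrative time normalization)."""
    today = today or date.today()
    # Treat anything published this year as one year old to avoid division by zero.
    years = max(today.year - pub_year, 1)
    return citations / years

# A 2015 paper with 90 citations vs. a 2022 paper with 30 citations (as of 2025):
older = annualized_citation_rate(90, 2015, date(2025, 1, 1))  # 9.0 per year
newer = annualized_citation_rate(30, 2022, date(2025, 1, 1))  # 10.0 per year
```

On raw counts the older paper looks stronger (90 vs. 30), but after time normalization the newer paper's citation rate is slightly higher, which is exactly the distortion the framework aims to correct.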
From this foundation, the paper distills seven concrete rules for integrating bibliometrics into candidate assessments:
- Consider both total publication count and average citations per paper: evaluating productivity together with efficiency prevents over-valuing a single highly cited work.
- Weight the contribution of "core" papers: assess the proportion of a candidate's output that appears in the top 10–20% of journals in the field, using relative impact rather than raw journal impact factors.
- Account for co-authorship and author order: estimate individual contribution by examining the number of co-authors, first-author versus senior-author positions, and the typical authorship conventions of the discipline.
- Screen for excessive self-citations: a self-citation rate above roughly 10% may indicate inflation and should be discounted.
- Use download or view counts as an early-interest signal, but pair them with conversion rates to citations: a low citation-to-download ratio can reveal papers that attract attention without lasting scholarly influence.
- Incorporate non-traditional outputs: research grant funding, patents, industry collaborations, policy briefs, and other impact pathways should be evaluated alongside publications, especially for applied fields.
- Blend quantitative metrics with qualitative assessments: recommendation letters, interview performance, research statements, and expert panel reviews should be combined using a weighted scoring system that reflects institutional priorities.
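The last rule's weighted scoring system could be sketched as follows. The component names and weights are hypothetical, chosen only for illustration; the paper prescribes no specific formula:

```python
def composite_score(scores, weights):
    """Weighted combination of assessment components, each normalized to 0-1.

    Weights are renormalized so the result stays on a 0-1 scale even if
    they do not sum to exactly 1. All component names are illustrative.
    """
    total_weight = sum(weights.values())
    return sum(scores[name] * w for name, w in weights.items()) / total_weight

# Hypothetical candidate with pre-normalized component scores:
candidate = {
    "citations_per_paper": 0.7,  # field- and time-normalized
    "core_paper_share":    0.6,  # fraction of output in top journals
    "qualitative_review":  0.8,  # letters, interview, panel, scaled to 0-1
}
weights = {"citations_per_paper": 0.3, "core_paper_share": 0.2, "qualitative_review": 0.5}
score = composite_score(candidate, weights)  # ~0.73
```

Keeping the weights explicit and renormalized makes the institutional priorities transparent, which is the property the summary's final rule asks for.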
Each rule is illustrated with practical examples, and the authors provide cautions about over-reliance on any single metric. For instance, they advise against using journal impact factor as an absolute benchmark; instead, compare a journal's impact to the field's median. They also note that a download-to-citation conversion below 5% may signal high curiosity but limited scholarly uptake.
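The conversion-rate caution is simple to compute. The 5% threshold comes from the summary above; the counts in the example are made up:

```python
def citation_download_ratio(citations, downloads):
    """Fraction of downloads that converted into citations."""
    if downloads == 0:
        return 0.0  # no download data: nothing to convert
    return citations / downloads

# Hypothetical paper: heavily downloaded, rarely cited.
ratio = citation_download_ratio(citations=12, downloads=400)  # 0.03
if ratio < 0.05:  # below the ~5% threshold noted in the paper
    print("high curiosity, limited scholarly uptake")
```

A paper can therefore look strong on altmetric-style attention while falling below the conversion threshold that signals lasting influence.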
The discussion section highlights ethical and equity concerns. Bibliometric reliance can unintentionally reinforce "citation cartels," encourage excessive self-citation, and disadvantage researchers from under-represented groups, non-English-speaking regions, or emerging disciplines that are poorly indexed. The authors argue that decision-makers must remain aware of these systemic biases and apply corrective measures, such as field-normalized scores and transparent weighting schemes.
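Field normalization, one of the corrective measures mentioned, typically divides a paper's citations by the average for its field and publication year. The sketch below assumes a simple lookup table of field baselines; the baseline values are invented for illustration:

```python
def field_normalized_score(citations, field, year, baselines):
    """Citations relative to the field/year average (1.0 = exactly average).

    `baselines` maps (field, year) to that cohort's mean citation count;
    the values used below are made up for illustration.
    """
    return citations / baselines[(field, year)]

baselines = {("ecology", 2020): 8.0, ("mathematics", 2020): 2.5}

# The same raw count of 10 citations means different things in different fields:
eco_score = field_normalized_score(10, "ecology", 2020, baselines)       # 1.25
math_score = field_normalized_score(10, "mathematics", 2020, baselines)  # 4.0
```

The normalized score makes the mathematics paper stand out despite identical raw counts, which is precisely the systemic bias the authors want correctives to address.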
In conclusion, the paper emphasizes that bibliometric indicators are valuable support tools but should never replace a holistic, multi-dimensional evaluation process. Effective personnel decisions require a balanced integration of quantitative data, expert judgment, and contextual understanding of each candidate's research trajectory and broader contributions to science and society.