Weighted Indices for Evaluating the Quality of Research with Multiple Authorship

Notice: This research summary and analysis were automatically generated using AI. For accuracy, please refer to the original arXiv source.

Devising an index to measure the quality of research is a challenging task. In this paper, we propose a set of indices to evaluate the quality of research produced by an author. Our indices utilize a policy that assigns weights to the multiple authors of a paper. We have considered two weight assignment policies: positionally weighted and equally weighted. We propose two classes of weighted indices: weighted h-indices and weighted citation h-cuts. Further, we compare our weighted h-indices with the original h-index for a selected set of authors. Unlike the h-index, our weighted h-indices take into account the weighted contributions of individual authors in multi-authored papers, and may serve as an improvement over the h-index. The other class of weighted indices, which we call weighted citation h-cuts, takes into account the number of citations that are in excess of those required to compute the index, and may serve as a supplement to the h-index or its variants.


💡 Research Summary

The paper tackles a well‑known shortcoming of the classic h‑index: its inability to differentiate the contributions of individual authors in multi‑authored papers. To address this, the authors introduce a systematic weighting framework that assigns a numerical weight to each co‑author based on two distinct policies. The first, a positional weighting scheme, assumes that author order reflects contribution, giving the first author the highest weight and decreasing weights for subsequent authors (e.g., weights proportional to 1/i or 2/(i+1)). The second, an equal weighting scheme, treats all co‑authors as having contributed equally, assigning each a weight of 1/n where n is the total number of authors.
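The two weighting policies can be sketched in a few lines. This is an illustrative implementation, not the paper's exact formulation: it uses the harmonic 1/i positional scheme mentioned above and assumes, as a normalization choice, that each paper's weights sum to 1.

```python
def positional_weights(n):
    """Positional weighting: author at position i (1-based) gets a weight
    proportional to 1/i, normalized here so the n weights sum to 1."""
    total = sum(1.0 / i for i in range(1, n + 1))
    return [(1.0 / i) / total for i in range(1, n + 1)]

def equal_weights(n):
    """Equal weighting: each of the n co-authors receives 1/n."""
    return [1.0 / n] * n
```

For a three-author paper, positional weighting gives the first author 6/11 of the credit versus 1/3 under equal weighting, which is the gap that drives the field-level differences reported below.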

With these weights, the raw citation count (c) of a paper is transformed into a weighted citation count (c′ = w × c) for each author. Two new metrics are then defined:

  1. Weighted h‑index (wh‑index) – identical in definition to the traditional h‑index but calculated using weighted citation counts. An author’s wh‑index is the largest integer h such that the author has at least h papers with weighted citations ≥ h. This index directly incorporates the author’s share of each paper’s impact, thereby reducing the inflation that can occur when a highly cited collaborative work is fully credited to every co‑author.

  2. Weighted citation h‑cut (wh‑cut) – a supplemental measure that captures the “excess” citations beyond the threshold required for the h‑index. Formally, wh‑cut = Σ_{i=1}^{h} max(0, c′_i − h), where the sum runs over the h papers that define the wh‑index. This quantity distinguishes authors who have the same wh‑index but differ in the magnitude of their highly cited papers, offering a finer granularity of research influence.
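The two definitions above translate directly into code. The sketch below assumes the input is an author's list of weighted citation counts c′ (one per paper, possibly fractional), computed as c′ = w × c under either weighting policy.

```python
def weighted_h_index(weighted_citations):
    """Largest integer h such that at least h papers have
    weighted citation counts >= h."""
    ranked = sorted(weighted_citations, reverse=True)
    h = 0
    for rank, c in enumerate(ranked, start=1):
        if c >= rank:
            h = rank
    return h

def weighted_h_cut(weighted_citations):
    """Sum of the 'excess' citations max(0, c'_i - h) over the h papers
    that define the weighted h-index."""
    ranked = sorted(weighted_citations, reverse=True)
    h = weighted_h_index(ranked)
    return sum(max(0.0, c - h) for c in ranked[:h])
```

For example, weighted counts [10, 8, 5, 4, 3] give a wh-index of 4 and a wh-cut of (10-4) + (8-4) + (5-4) + (4-4) = 11, illustrating how two authors with the same wh-index can be separated by their excess citations.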

The authors validate the proposed metrics on a sample of researchers from three disciplines—computer science, physics, and life sciences—using data extracted from Scopus and Web of Science. For each researcher, they compute the traditional h‑index, the wh‑index under both weighting policies, and the corresponding wh‑cut. The empirical findings reveal several patterns:

  • Under positional weighting, fields where first‑author dominance is common (e.g., computer science) show a noticeable increase in wh‑index relative to the classic h‑index, reflecting the greater credit given to primary contributors.
  • Under equal weighting, disciplines with extensive collaborative authorship (e.g., life sciences) exhibit minimal differences between h and wh, indicating that the equal scheme mitigates over‑attribution while preserving fairness.
  • The wh‑cut values vary widely among researchers sharing the same h‑index, highlighting that the excess‑citation component can effectively separate “high‑impact” scholars from those whose citation profiles are more modest.

The discussion acknowledges potential limitations. Positional weighting relies on the assumption that author order mirrors contribution, which is not universally true (e.g., alphabetical listings, joint senior authors). The choice of weighting function is somewhat subjective, and the authors suggest that field‑specific standards or the integration of contribution taxonomies such as CRediT could improve robustness. Moreover, the availability and accuracy of author‑order metadata across bibliographic databases can affect the reliability of the metrics.

In conclusion, the paper presents a coherent, implementable extension to the h‑index that accounts for multi‑author contribution through explicit weighting. By offering both a weighted h‑index and a complementary h‑cut, the framework provides a more nuanced assessment of scholarly impact, suitable for evaluation committees, funding agencies, and individual researchers seeking a fairer representation of their work. Future research directions include dynamic weighting based on declared contributions, longitudinal studies of metric stability, and the exploration of policy‑driven incentives that align author credit with actual research effort.

