The Source-Normalized Impact per Paper (SNIP) is a valid and sophisticated indicator of journal citation impact


This paper is a reply to the article “Scopus’s Source Normalized Impact per Paper (SNIP) versus a Journal Impact Factor based on Fractional Counting of Citations”, published by Loet Leydesdorff and Tobias Opthof (arXiv:1004.3580v2 [cs.DL]). It clarifies the relationship between SNIP and Elsevier’s Scopus. Because Leydesdorff and Opthof’s description of SNIP is incomplete, the reply identifies four key differences between SNIP and the indicator the two authors propose, and argues why the former is more valid than the latter. Nevertheless, the idea of fractional citation counting deserves further exploration. The paper discusses difficulties that arise if one attempts to apply this principle at the level of individual (citing) papers.


💡 Research Summary

The paper by Henk F. Moed is a rebuttal to the article by Leydesdorff and Opthof (L&O) that proposed a journal impact indicator based on fractional citation counting (FCC). Moed first situates the discussion within the broader problem of field‑normalized citation metrics: citation practices differ dramatically across disciplines (e.g., chemistry papers typically contain >50 references while mathematics papers may have only ~10), making raw citation counts incomparable. He cites Garfield’s classic warning that evaluation studies must compensate for such disparities.

Moed then explains the principle of “citing‑side normalization” (Zitt & Small, 2008) and describes how his own Source‑Normalized Impact per Paper (SNIP) implements it. SNIP is defined as the ratio of two averages: (1) the average number of citations received in a given year by a journal’s 1‑3‑year‑old articles, and (2) the average number of 1‑3‑year‑old references appearing in the set of papers that cite that journal (the journal’s “subject field”). The subject field is constructed from all papers that cite at least one article from the journal within a 1‑10‑year window, thereby avoiding a bias toward recent citations. Crucially, only references to sources indexed in Scopus are counted in the denominator, which corrects for differences in database coverage across fields.
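On invented numbers, the ratio‑of‑averages construction described above can be sketched as follows (the citation and reference counts are hypothetical, chosen only to illustrate the arithmetic):

```python
# Hypothetical sketch of SNIP as a ratio of two averages.
# Numerator data: citations received in year Y by each of the journal's
# 1-3-year-old articles.
journal_citations = [4, 0, 7, 2, 5]

# Denominator data: number of 1-3-year-old references (to Scopus-indexed
# sources only) in each paper of the journal's subject field.
field_recent_refs = [12, 8, 0, 15, 10, 5]

raw_impact = sum(journal_citations) / len(journal_citations)       # average citations per paper
citation_potential = sum(field_recent_refs) / len(field_recent_refs)  # field's average reference density

snip = raw_impact / citation_potential
print(round(snip, 3))
```

Note that the two averages are computed first and divided once at the end; the per‑paper reference counts are never turned into individual weights.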

The L&O indicator, by contrast, applies FCC at the level of each citing paper: each citation from a paper with n references receives a weight of 1/n, and the journal’s score is the average of these weighted citations. This is an “average of ratios” approach, whereas SNIP is a “ratio of averages.”
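The contrast between the two constructions can be made concrete with a toy example (all counts are hypothetical): four citing papers each send one citation to a journal that published two articles in the window.

```python
# Toy contrast: L&O's "average of ratios" vs SNIP's "ratio of averages".
# Four citing papers, each citing the journal once; n_i is each paper's
# reference-list length.
ref_counts = [50, 40, 10, 5]
journal_paper_count = 2  # articles the journal published in the window

# L&O fractional counting: each citation is weighted 1/n, then the weighted
# citations are averaged over the journal's papers (average of ratios).
fcc_score = sum(1 / n for n in ref_counts) / journal_paper_count

# SNIP-style computation on the same data: average citations per journal
# paper divided by the average reference-list length (ratio of averages).
citations_per_paper = len(ref_counts) / journal_paper_count
avg_ref_length = sum(ref_counts) / len(ref_counts)
ratio_of_averages = citations_per_paper / avg_ref_length

print(round(fcc_score, 4), round(ratio_of_averages, 4))
```

The two scores differ because the average of ratios is dominated by the short reference lists (each 1/n weight enters separately), whereas the ratio of averages treats the field's citing behavior as a single pooled quantity.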

Moed identifies four substantive differences that make SNIP more valid: (i) SNIP’s numerator and denominator use the same citation window (1‑3 years), whereas L&O’s denominator mixes all references regardless of age, penalizing fields with long reference lists and slow citation “immediacy”; (ii) SNIP restricts the denominator to references in Scopus‑indexed journals, avoiding systematic undervaluation of fields with poorer database coverage (e.g., mathematics, engineering, humanities); (iii) SNIP’s subject field definition does not require the citing paper to contain recent references, thus preventing bias toward papers that cite only recent literature; (iv) SNIP’s scale is deliberately aligned with the familiar Journal Impact Factor, making interpretation easier for users.

Moed also discusses two problems that would arise if FCC were applied at the paper level within the SNIP framework. First, many papers in the subject field may have zero 1‑3‑year‑old references (r = 0). Excluding them would bias the estimate of citation potential, yet assigning them a weight of zero would also be inappropriate. Second, weighting each citation by 1/r implies that citations from papers with long reference lists count less than those from papers with short lists, even though both are drawn from the same field. This runs counter to the rationale of field‑level normalization, where the average propensity to cite should be the same for all papers in the field.
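Both paper‑level difficulties can be seen on a small made‑up dataset: some field papers have no recent references at all, and the 1/r weights vary sharply among papers drawn from the same field.

```python
# Illustrating the two paper-level FCC problems on hypothetical data:
# 1-3-year-old reference counts for five papers in a subject field.
recent_refs = [0, 0, 3, 12, 25]

# Problem 1: papers with r = 0 admit no 1/r weight. Excluding them
# changes the estimated citation potential of the field.
nonzero = [r for r in recent_refs if r > 0]
potential_all = sum(recent_refs) / len(recent_refs)    # all papers included
potential_excluded = sum(nonzero) / len(nonzero)       # r = 0 papers dropped

# Problem 2: with 1/r weighting, a citation from the 25-reference paper
# counts far less than one from the 3-reference paper, even though both
# papers belong to the same field.
weights = {r: 1 / r for r in nonzero}

print(round(potential_all, 2), round(potential_excluded, 2), weights)
```

The gap between the two potential estimates shows the exclusion bias, and the spread of the weights shows why per‑paper weighting conflicts with the field‑level rationale that all papers in a field share the same average propensity to cite.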

In the concluding section Moed notes that L&O’s claim that statistical significance testing is impossible for “ratio of averages” indicators is unfounded (Glänzel, 2010). He concedes that simplicity and elegance are matters of taste, but stresses that validity must be the decisive criterion. He calls for further research on FCC, both statistical and theoretical, and suggests comparative studies involving SNIP, SJR, and other journal metrics across a larger set of journals than the five examined by L&O.

Overall, the paper argues that SNIP, as implemented for Scopus, provides a more robust, field‑normalized measure of journal citation impact than the fractional counting approach proposed by Leydesdorff and Opthof, while also highlighting avenues for future methodological development.

