Integrated Impact Indicators (I3) compared with Impact Factors (IFs): An alternative research design with policy implications
In bibliometrics, the association of “impact” with central-tendency statistics is mistaken. Impacts add up, and citation curves should therefore be integrated instead of averaged. For example, the journals MIS Quarterly and JASIST differ by a factor of two in terms of their respective impact factors (IF), but the journal with the lower IF has the higher impact. Using percentile ranks (e.g., top-1%, top-10%, etc.), an integrated impact indicator (I3) can be based on integration of the citation curves, but after normalization of the citation curves to the same scale. The results across document sets can be compared as percentages of the total impact of a reference set. The total number of citations, however, should not be used instead, because it does not reflect the shape of the citation curves. I3 can be applied to any document set and any citation window. The results of the integration (summation) are fully decomposable in terms of journals or institutional units such as nations, universities, etc., because percentile ranks are determined at the paper level. In this study, we first compare I3 with IFs for the journals in two ISI Subject Categories (“Information Science & Library Science” and “Multidisciplinary Sciences”). The LIS set is additionally decomposed in terms of nations. Policy implications of this possible paradigm shift in citation impact analysis are specified.
💡 Research Summary
The paper critically examines the widely used journal Impact Factor (IF), arguing that its reliance on a two‑year average of citations obscures the true “impact” of scholarly output. The authors illustrate the problem with two journals in the Information Science & Library Science (LIS) category: MIS Quarterly, which has an IF roughly twice that of the Journal of the American Society for Information Science and Technology (JASIST), yet the most highly cited papers in JASIST receive substantially more citations than those in MIS Quarterly. This discrepancy arises because the IF averages out the citation distribution, ignoring both the size of the publication set and the highly skewed nature of citation counts.
To address this, the authors propose the Integrated Impact Indicator (I3), which treats impact as an additive quantity rather than a central‑tendency measure. Each article’s citation count is first normalized to a percentile rank within its document type, publication year, and ISI subject category. Percentiles (0–100) or coarser six‑class ranks (top‑1 %, top‑5 %, top‑10 %, top‑25 %, top‑50 %, bottom‑50 %) are then assigned weights (e.g., 6 for top‑1 % down to 1 for bottom‑50 %). The I3 for a set of documents is simply the sum of these weighted percentiles. Because sums are additive, I3 can be decomposed by journal, institution, country, or any other aggregation level, and expressed as a percentage of the total impact of a reference set.
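The six-class scheme described above can be sketched in a few lines. This is an illustrative reading of the summary, not the authors' code: the class thresholds and the weights 6 through 1 come from the text, while the function names and the convention that a percentile of exactly 99 counts as top-1% are assumptions for the sketch.

```python
def six_class_weight(percentile):
    """Map a percentile rank (0-100, higher = more cited) to a class weight.

    Classes and weights follow the six-class scheme described in the paper:
    top-1% -> 6, top-5% -> 5, top-10% -> 4, top-25% -> 3,
    top-50% -> 2, bottom-50% -> 1.
    Boundary handling (>= vs >) is an assumption of this sketch.
    """
    if percentile >= 99:
        return 6
    if percentile >= 95:
        return 5
    if percentile >= 90:
        return 4
    if percentile >= 75:
        return 3
    if percentile >= 50:
        return 2
    return 1


def i3_6pr(percentiles):
    """I3 under the six-class scheme: sum the class weights over all papers."""
    return sum(six_class_weight(p) for p in percentiles)


def i3_100pr(percentiles):
    """I3 under the 100-percentile scheme: sum the percentile ranks directly."""
    return sum(percentiles)
```

Because both variants are plain sums over papers, decomposing I3 by journal or country is just partitioning the paper set and summing each part, and a unit's share of total impact is its sum divided by the reference set's sum.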
Empirically, the study harvested Web of Science data for two ISI subject categories: LIS (65 journals, 5,737 citable items from 2007‑2008) and Multidisciplinary Sciences (MS; 48 journals, 24,494 items). Only articles, reviews, proceedings papers, and letters were included. For each paper, the percentile rank was calculated using the counting rule that the number of items with lower citation counts determines the percentile; ties receive the higher possible rank (“benefit of the doubt”). The authors computed I3 using both the 100‑percentile (I3‑100PR) and the six‑class scheme (I3‑6PR).
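The counting rule with its “benefit of the doubt” tie handling can be made concrete with a short sketch. One plausible reading, assumed here, is that each paper's percentile is the share of papers in the reference set that it does not trail, so tied papers all receive the highest rank consistent with their citation count; the function name and the inclusive comparison are illustrative choices, not taken from the paper.

```python
def percentile_ranks(citations):
    """Assign each paper a percentile rank (0-100) within its reference set.

    Counting rule (as read from the summary): a paper's rank is determined
    by the number of items it does not trail. Ties are counted inclusively
    ("benefit of the doubt"), so all tied papers get the higher rank.
    """
    n = len(citations)
    ranks = []
    for c in citations:
        # Count papers cited at most as often as this one (including itself).
        below_or_tied = sum(1 for other in citations if other <= c)
        ranks.append(100.0 * below_or_tied / n)
    return ranks
```

For a reference set with citation counts [0, 1, 1, 5], the two tied papers both receive the 75th percentile rather than being split across the 50th and 75th, and the top paper receives 100.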
Key findings include:
- I3 captures both the size of a journal’s output and the shape of its citation distribution. In the LIS set, MIS Quarterly’s I3‑100PR is 5,581.4 (2.61 % of the total LIS impact) while JASIST’s is 20,811.3 (9.73 %). Thus, despite a lower IF, JASIST exerts a markedly higher integrated impact.
- The total I3 for the LIS category equals 213,906.2 (100PR) and 10,049 (6PR), showing that the summed impact is far larger than any single journal’s contribution.
- Correlation analyses reveal only a modest relationship between I3 and IF (r≈0.38), confirming that the two metrics assess different dimensions.
- Regression of I3 against the number of citable items shows considerable heterogeneity (R²≈0.38), reflecting the diverse functions of journals within the same subject category (e.g., newsletters vs. research journals).
Statistical procedures employed include SPSS “Compare Means” for means, sums, and standard errors; Pearson and Spearman correlations for metric comparisons; and non‑parametric multiple‑comparison tests (Bonferroni‑adjusted LSD approximating Dunn’s test) to assess whether citation distributions differ significantly across journals.
The authors argue that I3’s additive nature makes it especially suitable for research evaluation and science policy. Because I3 can be expressed as a share of total impact, it allows fair comparison of entities of different sizes (e.g., small institutions that produce a few highly cited papers versus large institutions with many modestly cited papers). This could mitigate the “big‑journal bias” inherent in IF‑based assessments, encouraging diversity, innovation, and more equitable allocation of research funding.
In conclusion, the Integrated Impact Indicator offers a theoretically sound and practically versatile alternative to the Impact Factor. By integrating normalized citation percentiles rather than averaging them, I3 respects both the quantity and quality of scholarly output, enabling transparent, additive, and policy‑relevant impact assessments across journals, institutions, and nations.