Validity of altmetrics data for measuring societal impact: A study using data from Altmetric and F1000Prime

Notice: This research summary and analysis were automatically generated using AI technology. For absolute accuracy, please refer to the original arXiv source.

Can altmetric data be validly used for the measurement of societal impact? The current study seeks to answer this question with a comprehensive dataset (about 100,000 records) from very disparate sources (F1000, Altmetric, and an in-house database based on Web of Science). In the F1000 peer review system, experts attach particular tags to scientific papers which indicate whether a paper could be of interest for science or rather for other segments of society. The results show that papers with the tag “good for teaching” do achieve higher altmetric counts than papers without this tag, if the quality of the papers is controlled for. At the same time, higher citation counts are shown especially by papers with a tag that is specifically scientifically oriented (“new finding”). The findings indicate that papers tailored for a readership outside the area of research should lead to societal impact. If altmetric data is to be used for the measurement of societal impact, the question arises of its normalization. In bibliometrics, citations are normalized for the papers’ subject area and publication year. This study has taken a second analytic step involving a possible normalization of altmetric data. As the results show, there are particular scientific topics which are of special interest to a wide audience. Since these more or less interesting topics are not completely reflected in Thomson Reuters’ journal sets, a normalization of altmetric data should not be based on the level of subject categories, but on the level of topics.


💡 Research Summary

The paper addresses a central question in research evaluation: can altmetric data be used reliably to measure the societal impact of scholarly publications? To answer this, the authors assembled a comprehensive dataset of roughly 100,000 papers by linking three disparate sources: the post‑publication peer‑review platform F1000Prime, the altmetric aggregator Altmetric.com, and an in‑house database derived from the Web of Science. The unique feature of the F1000 system is that expert reviewers assign specific tags to each paper, indicating whether the work is primarily of interest to the scientific community or to broader audiences. The most relevant tags for this study are “good for teaching” (suggesting relevance for education and non‑specialist readers) and “new finding” (signifying a novel scientific contribution).

The authors first examined whether these tags correlate with altmetric attention after controlling for paper quality. Paper quality was operationalized using two variables: the F1000 expert rating score and the conventional citation count. By fitting multiple regression models that included these quality controls, they found that papers labelled “good for teaching” received significantly higher altmetric scores—approximately a 23% increase relative to comparable papers without the tag (p < 0.001). In contrast, papers tagged “new finding” exhibited a strong positive relationship with citation counts but only a modest association with altmetric attention. This pattern suggests that altmetrics capture a dimension of impact that is distinct from traditional citations: they are more sensitive to the diffusion of research among educators, students, and the general public, whereas citations remain a proxy for scholarly uptake.
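The regression setup described above can be sketched with synthetic data. Everything in the snippet is an illustrative assumption, not the authors' actual model: the variable names, the simulated effect sizes, and the choice of OLS on log-transformed altmetric counts are stand-ins for whatever specification the paper used.

```python
import numpy as np

# Hypothetical illustration: predict (log) altmetric counts from a
# "good for teaching" tag while controlling for quality via the F1000
# expert score and the conventional citation count. All data are simulated.
rng = np.random.default_rng(0)
n = 500
teaching_tag = rng.integers(0, 2, n)   # 1 = tagged "good for teaching"
f1000_score = rng.uniform(1, 3, n)     # expert rating (assumed 1-3 scale)
citations = rng.poisson(10, n)         # conventional citation count

# Simulate a tag effect on altmetric attention (0.23 is assumed here,
# echoing the ~23% increase reported in the summary)
log_alt = (0.5 + 0.23 * teaching_tag + 0.3 * f1000_score
           + 0.02 * citations + rng.normal(0, 0.5, n))

# OLS fit: design matrix with intercept, tag dummy, and quality controls
X = np.column_stack([np.ones(n), teaching_tag, f1000_score, citations])
beta, *_ = np.linalg.lstsq(X, log_alt, rcond=None)
print(f"estimated 'good for teaching' effect: {beta[1]:.2f}")
```

The key point the sketch illustrates is that the tag coefficient is estimated while the quality variables are held in the model, so the tag effect is net of quality differences.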

The second analytical thrust concerns the normalization of altmetric data. In bibliometrics, citation counts are routinely normalized by field and publication year, usually based on journal subject categories (e.g., Thomson Reuters’ Subject Categories). The authors argue that such a journal‑based approach is inadequate for altmetrics because public interest is driven more by the substantive topic of a paper than by the journal in which it appears. To test this, they applied Latent Dirichlet Allocation (LDA) to the abstracts of all papers, extracting about 30 coherent topics (e.g., climate change and public health policy). They then calculated average altmetric scores for each topic. The results revealed that certain topics—particularly those with clear societal relevance—consistently attracted higher altmetric attention, irrespective of the journal’s disciplinary classification. Moreover, within the same journal, papers belonging to high‑interest topics outperformed those on less salient topics by a large margin. These findings imply that normalizing altmetrics at the journal‑subject level would mask genuine differences in public engagement across topics.

Beyond topic‑level normalization, the authors discuss additional variables that can bias altmetric scores: open‑access status, year of publication, and platform‑specific dynamics (e.g., Twitter versus news outlets). They caution that any evaluation framework that incorporates altmetrics must either control for or explicitly model these factors to avoid systematic distortions.
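One simple way to fold the publication-year factor into the normalization above is to compute baselines per topic-year cell, so that older papers (which have had longer to accumulate attention) are compared only against papers of the same topic and year; open-access status could be added as a further stratifier in the same way. This is a sketch of that idea with invented data, not a method the paper prescribes:

```python
from collections import defaultdict

# Hypothetical papers: same topic, two publication years
papers = [
    {"topic": "climate", "year": 2012, "alt": 40},
    {"topic": "climate", "year": 2012, "alt": 60},
    {"topic": "climate", "year": 2013, "alt": 10},
    {"topic": "climate", "year": 2013, "alt": 30},
]

# Mean altmetric count per (topic, year) cell
cells = defaultdict(list)
for p in papers:
    cells[(p["topic"], p["year"])].append(p["alt"])
cell_means = {k: sum(v) / len(v) for k, v in cells.items()}

# Normalize each paper against its own topic-year baseline
for p in papers:
    p["norm"] = p["alt"] / cell_means[(p["topic"], p["year"])]

print([p["norm"] for p in papers])  # [0.8, 1.2, 0.5, 1.5]
```

With raw counts, every 2012 paper would beat every 2013 paper; with cell-level baselines, the 2013 paper with 30 mentions (1.5) correctly registers as more remarkable for its cohort than the 2012 paper with 40 (0.8).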

In sum, the study provides robust empirical evidence that altmetric indicators are valid measures of societal impact when the focus is on research that is “good for teaching” or otherwise oriented toward non‑specialist audiences. However, the authors stress that altmetrics cannot be used in isolation. A rigorous assessment of societal impact should combine (1) quality controls (expert ratings, citation counts), (2) topic‑based normalization to account for intrinsic differences in public interest, and (3) adjustments for ancillary variables such as open‑access status. By proposing a topic‑centric normalization scheme, the paper offers a practical roadmap for research evaluation agencies, funding bodies, and institutions that wish to complement traditional citation‑based metrics with a more nuanced picture of how scholarly work resonates beyond academia.

