Does the Web of Science Accurately Represent Chinese Scientific Performance?


The purpose of this study is to compare the Web of Science (WoS) with a Chinese bibliometric database in terms of authors and their performance, to demonstrate the extent of the overlap between the most productive Chinese authors in the international and Chinese bibliometric databases, and to determine how this overlap varies across disciplines. The results indicate that Chinese bibliometric databases, or a combination of WoS and Chinese bibliometric databases, should be used to evaluate Chinese research performance, except in a few disciplines in which Chinese research performance can be assessed using WoS alone.


💡 Research Summary

The paper investigates whether the Web of Science (WoS) alone can accurately represent Chinese scientific performance by comparing it with a major Chinese bibliometric database, VIP (the Chongqing VIP Chinese Science and Technology Periodical Database). The authors retrieved all papers with a Chinese institutional address published between 2008 and 2015, obtaining 1,452,380 records from WoS and 29,940,090 records from VIP. To enable a field-by-field comparison, they mapped WoS's 232 subject categories to VIP's 35 major fields and 457 sub-fields, establishing one-to-one equivalences for 116 disciplines (83 in the natural sciences, 21 in the social sciences, and 12 in the arts and humanities), which together represent roughly two-thirds of all Chinese publications in both databases.

For each discipline, the top‑100 most productive authors (including ties) were identified in both datasets, yielding 26,969 author records. Because Chinese names can be ambiguous—especially when transliterated into English—the authors applied an automated disambiguation based on full name plus primary affiliation, followed by an extensive six‑month manual validation that resolved 120,953 ambiguous WoS entries and corrected institutional name inconsistencies (e.g., “JINAN‑UNIV” vs. “UNIV‑JINAN”).
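The disambiguation strategy described above can be sketched in a few lines. This is a hypothetical illustration, not the authors' actual code: it normalizes affiliation strings so that word-order variants such as "JINAN-UNIV" and "UNIV-JINAN" collapse to a single key, then groups author records by (full name, normalized affiliation).

```python
from collections import defaultdict

def normalize_affiliation(aff: str) -> str:
    # Sort hyphen-separated tokens so word-order variants
    # like "JINAN-UNIV" and "UNIV-JINAN" map to the same key.
    return "-".join(sorted(aff.upper().split("-")))

def disambiguate(records):
    # Group raw author records by (full name, normalized affiliation);
    # each resulting group is treated as one author identity.
    authors = defaultdict(list)
    for name, affiliation in records:
        key = (name, normalize_affiliation(affiliation))
        authors[key].append((name, affiliation))
    return authors

# Toy records: the first two should merge into one identity.
records = [
    ("Wei Zhang", "JINAN-UNIV"),
    ("Wei Zhang", "UNIV-JINAN"),
    ("Wei Zhang", "PEKING-UNIV"),
]
groups = disambiguate(records)
print(len(groups))  # 2 distinct author identities
```

In practice the paper's six-month manual validation handled the cases such a rule misses (homonyms at the same institution, transliteration variants of the name itself), which no purely automated key can resolve.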

Statistical analysis revealed virtually no correlation (r = 0.0131) between the number of publications per discipline in WoS and VIP, indicating that the two databases capture fundamentally different subsets of Chinese research. Overlap of the most productive authors varied dramatically by field: natural-science disciplines showed an average overlap of about 30%, whereas social-science and humanities disciplines exhibited overlaps below 5%. Institutional affiliation overlap followed a similar pattern, with lower concordance in interdisciplinary and collaborative fields.
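The overlap measure can be illustrated with a small sketch. The summary does not give the paper's exact formula, so this assumes overlap is the share of authors appearing in both top lists, relative to the smaller list (a common convention when tie-inclusive lists differ in length):

```python
def author_overlap(wos_top, vip_top):
    # Fraction of top authors present in both databases,
    # normalized by the smaller list (lists may differ due to ties).
    wos, vip = set(wos_top), set(vip_top)
    if not wos or not vip:
        return 0.0
    return len(wos & vip) / min(len(wos), len(vip))

# Toy example: 2 of 5 top "authors" appear in both databases.
wos_top = ["A1", "A2", "A3", "A4", "A5"]
vip_top = ["A4", "A5", "A6", "A7", "A8"]
print(author_overlap(wos_top, vip_top))  # 0.4
```

Computed per discipline over the tie-inclusive top-100 lists, this kind of statistic yields the roughly 30% (natural sciences) versus below 5% (social sciences and humanities) contrast reported above.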

The authors conclude that relying solely on WoS leads to a systematic under‑representation of Chinese research output, especially in the social sciences and humanities where Chinese‑language journals dominate. While a few natural‑science fields may be adequately captured by WoS alone, the majority of disciplines require the combined use of both WoS and a Chinese national database such as VIP to obtain a comprehensive and unbiased assessment of Chinese scientific performance. The study underscores the importance of field‑specific bibliometric strategies for policymakers, research evaluators, and scholars interested in accurately measuring China’s contribution to global science.

