The new interface of the Web of Science (Thomson Reuters) enables users to retrieve sets larger than 100,000 documents in a single search. This makes it possible to compare publication trends for China, the USA, the EU-27, and a number of smaller countries. China's output no longer grew exponentially during the 2000s, but linearly. Contrary to previous predictions based on exponential growth or on Scopus data, the cross-over of the lines for China and the USA is postponed to the next decade (after 2020) according to these data. Such long extrapolations, however, should be used only as indicators and not as predictions. Along with the dynamics in the publication trends, one also has to take into account the dynamics of the databases used for the measurement.
On March 28, 2011, BBC News online ran the headline that the Royal Society (the UK's national science academy) had issued a report warning that "China (was) 'to overtake US on science' in two years," based on Elsevier's Scopus data (Clarke et al., 2011; Plume, 2011; see Figure 1). In the weeks thereafter, this news led to discussions on the email list of the US National Science Foundation's "Science of Science Policy" listserver (at scisip@listserv.nsf.gov) about the quality of a prediction based on Scopus data. More recently, in July 2011, Thomson Reuters launched Version 5 of the Web of Science (WoS), which allows the user, as in Scopus, to search directly for countries' shares of contributions, whereas in the previous version one had to work around the limit of 100,000 records per search (Arencibia-Jorge et al., 2009).
Both Scopus and the Science Citation Index now allow direct access to large retrieval sets. In this communication, the new WoS version of the Science Citation Index Expanded (SCIE) is first used to show the long-term trends of a few leading nations in science, as well as some smaller ones. The ten-year trendlines for the USA, China, and the EU-27 can be compared using confidence intervals (at the 95% level) for the prediction. These results are compared with those of the Royal Society; the latter are reproduced using the online version of Scopus, but now including data for 2009 and 2010. However, the Elsevier and Royal Society team used Scopus including the social sciences and humanities, whereas these fields were not included when using the SCIE for the measurement. After correcting for this difference, the decline of both the EU-27 and the US since 2004 disappears in the Scopus data. The significant differences that follow from using the two databases and from different assumptions in the measurement raise questions about the reliability of the prediction.
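For readers who wish to reproduce this kind of trendline analysis, the following is a minimal sketch in Python (using statsmodels) of fitting an ordinary least-squares line to ten years of annual publication counts and extrapolating it with 95% intervals. The counts in the example are illustrative placeholders, not the SCIE retrievals reported here; the method, not the numbers, is the point.

    # Fit a ten-year linear trendline to annual publication counts and
    # extrapolate with 95% prediction intervals. The counts below are
    # invented placeholders for illustration only.
    import numpy as np
    import statsmodels.api as sm

    years = np.arange(2000, 2010)
    counts = np.array([25, 31, 38, 49, 57, 68, 79, 92, 104, 120]) * 1000.0

    X = sm.add_constant(years)            # intercept + slope
    fit = sm.OLS(counts, X).fit()

    future = np.arange(2010, 2021)        # extrapolation window
    pred = fit.get_prediction(sm.add_constant(future))
    frame = pred.summary_frame(alpha=0.05)  # 95% intervals

    for y, m, lo, hi in zip(future, frame["mean"],
                            frame["obs_ci_lower"], frame["obs_ci_upper"]):
        print(f"{y}: {m:,.0f}  [{lo:,.0f}, {hi:,.0f}]")

Note that the prediction interval ("obs_ci") widens as the extrapolation moves away from the fitted window, which is one reason why such long extrapolations should be read as indicators rather than predictions.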
The measurement of national publication outputs has been a methodological issue on the research agenda of scientometrics from the very beginning of the Science Citation Index.
Both Narin (1976) and Small & Garfield (1985) conceptualized this database as a matrix organized along the two dimensions of journals versus countries. The “decline of British Science” in the 1980s (under the Thatcher government), for example, spurred a debate about whether such a decline could perhaps be a scientometric artifact of parameter choices (Anderson et al., 1988; Braun et al., 1989 and 1991; Leydesdorff, 1988 and 1991; Martin, 1991).
At the time, the main database used for the Science (and Engineering) Indicators of the US National Science Board (since 1982) was based on two assumptions made by the contracting firm (at the time, Narin’s Computer Horizons Inc.): (1) internationally coauthored articles were attributed proportionally to the contributing nations (so-called “fractional counting”), and (2) a fixed journal set was extracted from the Science Citation Index for the purpose of longitudinal comparisons (Narin, 1986). Leydesdorff (1988) argued that both these assumptions had an effect on the measurement of the output of nations: the ongoing internationalization of coauthorship patterns decreased the national output ceteris paribus, and authors in advanced nations such as the UK can be expected to publish above average in new journals associated with newly developing fields of science.
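The difference between the counting rules can be made concrete with a small sketch. The record format below is a simplification (one country entry per address), and variants of fractional counting exist (e.g., per author rather than per address); the sketch shows only the per-address variant under these assumptions.

    # Whole versus fractional counting of national publication output.
    # Each paper is represented as a list of contributing countries,
    # one entry per address (hypothetical records for illustration).
    from collections import defaultdict

    papers = [
        ["US"],               # single-country paper
        ["US", "UK"],         # internationally coauthored
        ["CN", "US", "US"],   # two US addresses, one Chinese
    ]

    whole = defaultdict(float)
    fractional = defaultdict(float)

    for addresses in papers:
        for country in set(addresses):
            whole[country] += 1.0                       # full credit
        for country in addresses:
            fractional[country] += 1.0 / len(addresses)  # proportional

    print(dict(whole))       # {'US': 3.0, 'UK': 1.0, 'CN': 1.0}
    print(dict(fractional))  # {'US': 2.17, 'UK': 0.5, 'CN': 0.33} (approx.)

As the output shows, fractional counting lowers the tally of heavily internationalized countries relative to whole counting, which is precisely the ceteris-paribus effect at issue in the debate above.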
The issue led to a debate and eventually to a special issue of Scientometrics in 1991 (Braun et al., 1991). Braun et al. (1989) distinguished 28 possible parameter choices. The sensitivity of the measurement to relatively minor methodological decisions calls into question the role of policy advice based on these trendlines, for both nations and units at lower levels of aggregation (Leydesdorff, 1989; 1991). How reliable are these data for comparisons among years? One would expect random fluctuations to be averaged out at a high level of aggregation, and thus uncertainty to be reduced. Nowadays, one can additionally ask whether the two major databases (Scopus and the WoS) provide similar results. What may be sources of misspecification and therefore of potential misrepresentation in the policy arena (Leydesdorff, 2008)?
The rise of China as a leading nation in science is particularly salient to the science-policy debate today. How much of the spectacular increase in the Chinese world share of publications during the 1990s and 2000s can be attributed to internationalization at the expense of national publication outlets (Wagner, 2011)? Zhou & Leydesdorff (2006) conjectured that, unlike the linear growth witnessed previously in cases of the internationalization (and Anglification) of national research outputs (e.g., Scandinavia and the Netherlands during the 1980s; Italy and Spain during the 1990s), a reservoir of Chinese scientists who hitherto had access only to national journals was tapped.
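The choice between the two growth models matters considerably for the cross-over estimate, as a small sketch illustrates: fitting the same counts linearly and exponentially can yield cross-over years that differ by a decade or more. All numbers below, including the stylized flat US level, are invented for illustration.

    # Compare the cross-over year implied by a linear versus an
    # exponential fit of a growing national output against a roughly
    # constant US output. Illustrative placeholder data only.
    import numpy as np

    years = np.arange(2000, 2010).astype(float)
    china = np.array([25, 31, 38, 49, 57, 68, 79, 92, 104, 120]) * 1000.0
    us_level = 300_000.0   # stylized, roughly flat US output

    # Linear model: least squares on the raw counts.
    b1, b0 = np.polyfit(years, china, 1)
    crossover_linear = (us_level - b0) / b1

    # Exponential model: least squares on the logged counts.
    c1, c0 = np.polyfit(years, np.log(china), 1)
    crossover_exp = (np.log(us_level) - c0) / c1

    print(f"linear fit crosses {us_level:,.0f} around {crossover_linear:.0f}")
    print(f"exponential fit crosses it around {crossover_exp:.0f}")

With these placeholder counts, the exponential extrapolation crosses the US level several years earlier than the linear one, which mirrors why assuming exponential rather than linear growth for China moves the predicted cross-over forward.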