The Journal Impact Factor Should Not Be Discarded

Reading time: 5 minutes

📝 Original Info

  • Title: The Journal Impact Factor Should Not Be Discarded
  • ArXiv ID: 1612.04075
  • Date: 2016-12-14
  • Authors: Lutz Bornmann and Alexander I. Pudovkin

📝 Abstract

The Journal Impact Factor (JIF) has been heavily criticized over decades. This opinion piece argues that the JIF should not be demonized. It still can be employed for research evaluation purposes by carefully considering the context and academic environment.

📄 Full Content

Accepted for publication in the Journal of Korean Medical Science (JKMS)

Journal editors and experts in scientometrics are increasingly concerned with the reliability of the Journal Impact Factor (JIF, Clarivate Analytics, formerly the IP & Science business of Thomson Reuters) as a tool for assessing the influence of scholarly journals. A paper by Larivière et al. (1), which was posted on the bioRxiv preprint server and commented on in Nature (2), reminded all stakeholders of science communication that the citability of most papers in an indexed journal deviates significantly from its JIF. These authors recommend displaying journal citation distributions instead of the JIF, and the proposal has been widely discussed on social networking platforms (3,4).

The overall impression is that the discussion over the JIF is endless. The JIF, along with the h-index, is the simplest and most studied indicator in scientometrics (5,6). However, the commentary in Nature (2) and the subsequent debates over the citation distribution revived the interest of the scientific community in empirical analyses of the JIF and its uses and misuses in research evaluation.

After all the endless discussions, research evaluators should have realized that the JIF should not be used to measure the impact of single papers. But some experts still argue that the use of the JIF at the level of single papers cannot be simply separated from its use at the journal level (4). In some circumstances, the JIF may help authors and readers to pick, read, and cite certain papers. Papers from high-impact journals are more likely to be picked and cited than similar ones from low-impact periodicals.

The JIF should not be demonized. It still can be employed for research evaluation purposes by carefully considering the context and academic environment. Elsevier, provider of the Scopus database, rates the JIF as so important that the company recently introduced the near-doppelgänger CiteScore (see https://journalmetrics.scopus.com/). The JIF measures the average impact of papers published in a journal, with a citation window of only one year. The JIFs are calculated and published annually in the Journal Citation Reports (JCR, Clarivate Analytics). Papers counted in the denominator of the JIF formula are those published within the 2 years prior to the calculation of this citation metric. In contrast to the JIF, the new CiteScore metric considers papers from 3 years (instead of 2).
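The two-year window described above can be made concrete with a small sketch. The following Python snippet, using invented journal data, computes a JIF-style ratio: citations received in the census year to papers from the two preceding years, divided by the citable items of those years. This is only an illustration of the arithmetic; the real JCR rules for what counts as a citable item are more nuanced.

```python
# Illustrative sketch of the JIF arithmetic (not Clarivate's exact
# implementation; "citable item" rules in the real JCR are more nuanced).

def impact_factor(citations_in_year, citable_items, year):
    """JIF for `year`: citations received in `year` to papers published in
    the two preceding years, divided by the citable items of those years."""
    window = (year - 1, year - 2)
    cites = sum(citations_in_year[year].get(y, 0) for y in window)
    items = sum(citable_items.get(y, 0) for y in window)
    return cites / items

# Hypothetical journal: citations received in 2016, broken down by the
# publication year of the cited paper.
citations = {2016: {2015: 180, 2014: 220}}
items = {2015: 100, 2014: 100}

print(impact_factor(citations, items, 2016))  # (180 + 220) / 200 = 2.0
```

A CiteScore-style variant would simply widen `window` to the three preceding years, which is the only structural difference the text above describes.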

As such, the JIF (and also the CiteScore) covers a rather short term of interest toward papers (i.e., interest at the research front) and overlooks the long-term implications of publication activity (the so-called sticky knowledge) (7). The focus on the short-term attention of the field-specific community makes sense, since the JIF was initially designed to guide librarians in purchasing the most used modern periodicals for their libraries. Accordingly, the JIF cannot and should not be employed for evaluating the average impact of a journal's papers over the long run.

The JIF formula aims at calculating average numbers that reveal the central tendency of a journal's impact. As such, one or a few highly cited papers published within the 2-year window may boost the JIF. That is particularly the case with Nature, Science, and other influential journals (1). The skewed citation distribution implies that the JIF values do not reflect the real impact of most papers published in the indexed journal. The absolute number of citations received by a single paper is the correct measure of its impact. Currently, the Web of Science and Scopus databases can provide citation counts for evaluating the impact of single papers.
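The skew argument above can be illustrated with a toy example: a single blockbuster paper inflates the journal's mean citation count (the JIF-style average) far beyond what a typical paper in the journal receives. The citation counts below are invented for illustration only.

```python
# Toy illustration of citation skew: one highly cited paper pulls the mean
# far above the median, so a JIF-style average misrepresents most papers.
from statistics import mean, median

# Invented citation counts for ten papers in a hypothetical journal.
citations = [0, 1, 1, 2, 2, 3, 3, 4, 5, 300]  # one blockbuster paper

print(mean(citations))    # 32.1 -> what a JIF-style average reports
print(median(citations))  # 2.5  -> what a typical paper actually receives
```

This is exactly why Larivière et al. recommend displaying the full citation distribution rather than a single average.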

Importantly, the JIF is the best predictor of a single paper's citability (8). Studies examining the predictive value of the JIF, along with the number of authors and pages, support that notion (9). One can expect more citations to single papers published in higher-impact journals, compared to those in lower-impact ones.

Another important point is the field-dependency of the citations contributing to the JIFs. Citation rates differ across disciplines and subject categories, regardless of the scientific quality of the papers, and are confounded by field-specific authorship rules, publication activity, and referencing patterns (10). Such differences justified the development of field-normalized indicators, which are employed for evaluating individual researchers, research groups, and institutions (11,12). Since the JIF is not a field-normalized indicator, it can only be used for evaluations within a single subject category.

The SCImago Journal Rank (SJR) indicator, a variant of the JIF, was employed for institutional excellence mapping at www.excellencemapping.net (13,14). For institutions worldwide, this site maps the results of 2 indicators. First, the 'best paper rate' measures the

…(Full text truncated)…

