Rejoinder: Citation Statistics

Reading time: 5 minutes

📝 Original Info

  • Title: Rejoinder: Citation Statistics
  • ArXiv ID: 0910.3548
  • Date: 2009-10-20
  • Authors:
      - Robert Adler (Technion – Faculty of Electrical and Industrial Engineering)
      - John Ewing (Math for America)
      - Peter Taylor (University of Melbourne – Department of Mathematics and Statistics)

📝 Abstract

Rejoinder to "Citation Statistics" [arXiv:0910.3529]

📄 Full Content

arXiv:0910.3548v1 [stat.ME] 19 Oct 2009. Statistical Science 2009, Vol. 24, No. 1, 27–28. DOI: 10.1214/09-STS285REJ (main article DOI: 10.1214/09-STS285). © Institute of Mathematical Statistics, 2009.

Robert Adler, John Ewing and Peter Taylor

We would like to thank the discussants for reading our report and for their insightful and constructive comments. To start our brief response, we would like to quote Bernard Silverman's phrase "reducing an assessment of an individual to a single number is both morally and professionally repugnant." Bernard puts it strongly, but his underlying point, with which we strongly agree, is that "research quality" is not something that ought to be regarded as well-ordered.

We note the general support for the case that any analysis should be carried out in the context of a properly-defined model. Peter Hall calls for statisticians to undertake a study of "the nature of citation data, the information they contain and methods for analysing them if one must." Among the three of us, there are varying levels of enthusiasm for advocating such a project. A possible downside is the danger that such a study will add to the burgeoning number of proposals for carrying out citation analysis in a "better" way, and none of us has much enthusiasm for this. On the plus side, such a study would enable the mathematical sciences community to comment more authoritatively on citation statistics and the quantitative ranking measures that are derived from them. Given that the scientometric industry shows every sign of growing, it can be argued that it is the responsibility of the mathematical sciences, and particularly of statisticians, to develop this capability.

David Spiegelhalter and Harvey Goldstein pointed out that there is a lack of independence between individual authors' citation records due to issues of coauthorship. The effects of this lack of independence seem to be very poorly understood, and nothing in the literature that we reviewed sheds any light on them.

In our report, we spent some time discussing the meaning of citations. Sune Lehmann, Benny Lautrup and Andrew Jackson took this point further in their discussion of the fact that there needs to be agreement on the basic meaning of a researcher's citation distribution, which is something that goes beyond merely knowing what citations mean, which itself is not clear. Their example involving researchers A and B makes this point clearly.

We would like to emphasise three final points that have more to do with human behavior than statistics, and which were not emphasised in the report itself. The first is related to Bernard Silverman's point that any measurement or ranking system will drive researcher behavior via natural feedback mechanisms. Traditionally, the mechanisms adopted in academia have been qualitative rather than quantitative. Peer review has been at the core of the system. When carefully done, peer review not only provides accurate and professional assessments of an individual's contributions, but it also provides a balanced and educated interpretation of quantitative information such as prizes and citation data. Moving to a system based purely on quantitative citation metrics will deliver feedback more frequently, more unequivocally, and in a different way. It is not at all clear that "good research" (and we realise how loaded this term is) will be encouraged by such a system. Our strong opinion is that this feedback aspect is very important.

Related to this issue is another of particular concern. In general, it is not all that easy to fool one's peers, but it takes little imagination to see how, by adopting citation policies that are different from the norm in a particular discipline or sub-discipline, a small group of individuals could easily fool an automated assessment system built on citation data. Assessment is important to all of us, as individuals, as institutions, and as representatives of disciplines. Adopting a system, for short term gains, that is so easily open to abuse is a risk to research standards in the long term.

Our final point, which has been amplified by our experiences since the report was first released, is that almost everyone is affected by conflicts of interest.

Robert Adler, Faculty of Electrical Engineering, Faculty of Industrial Engineering and Management, Technion, Haifa, Israel, 32000. E-mail: robert@ieadler.technion.ac.il.
John Ewing, President, Math for America, 800 Third Ave, 31st fl, New York, New York 10022, USA. E-mail: ewing@mathforamerica.org.
Peter Taylor, Department of Mathematics and Statistics, University of Melbourne, Vic 3010, Australia. E-mail: p.taylor@ms.unimelb.edu.au.

This is an electronic reprint of the original article published by the Institute of Mathematical Statistics in Statistical Science, 2009, Vol. 24, No. 1, 27–28. This reprint differs from the original in pagination and typographic detail.
