Is there agreement on the prestige of scholarly book publishers in the Humanities? DELPHI over survey results


📝 Abstract

Despite playing an important role in supporting assessment processes, criticisms of evaluation systems and the categorizations they use are frequent. Considering acceptance by the scientific community an essential condition for using rankings or categorizations in research evaluation, the aim of this paper is to test the results of a ranking of scholarly book publishers’ prestige, Scholarly Publishers Indicators (SPI hereafter). SPI is a public, survey-based ranking of scholarly publishers’ prestige (among other indicators). The latest version of the ranking (2014) was based on an expert consultation with a large number of respondents. In order to validate and refine the results for the Humanities fields proposed by the assessment agencies, a Delphi technique was applied to the initial rankings with a panel of randomly selected experts. The results show an equalizing effect of the technique on the initial rankings, as well as a high degree of concordance between its theoretical aim (consensus among experts) and its empirical results (summarized with the Gini Index). The resulting categorization is regarded as more conclusive and more likely to be accepted by those under evaluation.

📄 Content

Is there agreement on the prestige of scholarly book publishers in the Humanities? DELPHI over survey results

Elea Giménez-Toledo, ILIA Research Group, Institute of Philosophy (IFS), Spanish National Research Council (CSIC). Albasanz Street, 26-28, 28037, Madrid. Email: elea.gimenez@cchs.csic.es.
Jorge Mañana-Rodríguez, ILIA Research Group, Institute of Philosophy (IFS), Spanish National Research Council (CSIC). Albasanz Street, 26-28, 28037, Madrid. Email: jorge.mannana@cchs.csic.es.

Abstract:
Despite playing an important role in supporting assessment processes, criticisms of evaluation systems and the categorizations they use are frequent. Considering acceptance by the scientific community an essential condition for using rankings or categorizations in research evaluation, the aim of this paper is to test the results of a ranking of scholarly book publishers’ prestige, Scholarly Publishers Indicators (SPI hereafter). SPI is a public, survey-based ranking of scholarly publishers’ prestige (among other indicators). The latest version of the ranking (2014) was based on an expert consultation with a large number of respondents. In order to validate and refine the results for the Humanities fields proposed by the assessment agencies, a Delphi technique was applied to the initial rankings with a panel of randomly selected experts. The results show an equalizing effect of the technique on the initial rankings, as well as a high degree of concordance between its theoretical aim (consensus among experts) and its empirical results (summarized with the Gini Index). The resulting categorization is regarded as more conclusive and more likely to be accepted by those under evaluation. Key words: Scholarly book publishers, publishers’ prestige, scientific evaluation, Delphi technique.
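The abstract summarizes the degree of expert consensus with the Gini Index. As a point of reference, the following is a minimal sketch of how a Gini coefficient can be computed over a distribution such as the votes received by each publisher in an expert consultation; the function name and the sample data are illustrative assumptions, not taken from the paper.

```python
def gini(values):
    """Gini coefficient of a list of non-negative values.

    Returns 0.0 for perfect equality (all values equal) and
    approaches 1.0 as the distribution concentrates on few items.
    """
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Standard rank-based formula:
    # G = 2 * sum(i * x_i) / (n * total) - (n + 1) / n,
    # with x_i sorted ascending and i running from 1 to n.
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted) / (n * total) - (n + 1) / n

# Hypothetical vote counts across four publishers:
print(gini([10, 10, 10, 10]))  # equal spread -> 0.0
print(gini([0, 0, 0, 40]))     # all votes on one publisher -> 0.75
```

A low Gini value over the expert responses would indicate opinions spread evenly across publishers, while a high value would indicate votes concentrated on a few publishers, i.e. stronger agreement on which publishers are prestigious.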

Acknowledgements The authors wish to thank ANECA for supporting this study, as well as the specialists in fields of the Humanities who participated in the consultation.

Introduction

The existence of performance-based assessment and funding systems (Hicks, 2012a; Frølich, 2011) implies, most of the time, the need to classify communication channels in order to evaluate research outputs. These kinds of tools are usually meant to ‘inform, not to perform’ (Sivertsen; Giménez-Toledo et al.) in research evaluation processes. Some evaluation processes, such as the Research Excellence Framework in the UK (REF, 2014), have opted not to use categorizations, classifications or rankings for researchers’ publications, relying instead on peer review-based procedures. Nevertheless, such tools are used in most evaluation systems to support informed decision making, as a guide for the evaluation and in combination with expert opinion. Categorizations, classifications and rankings also serve as a means of distinguishing scholarly journals and publishers from other types of publishers, and they add value in terms of comparison by contextualizing the position of each journal or publisher.
Despite playing an important role in supporting assessment processes, criticisms of evaluation systems and the categorizations they use are frequent. Among the best known and most numerous are those concerning the coverage and metrics of the Web of Science / Journal Citation Reports (Seglen, 1997; Jacsó, 2012; Bornmann and Marx, 2015, for example). Other information systems are not free from criticism either: this is the case for ERIH (Journals under threat, 2009), the Scimago Journal Rank (Mañana-Rodríguez, 2004; Jacsó, 2009) and the effects of the quality label for peer-reviewed books in Flanders (Borghart, 2013). Controversy is inherent in evaluation processes in general and in the tools developed for them in particular. Nevertheless, not all criticism is equally grounded in evidence: from opinion pieces to empirical demonstrations to published manifestos, criticism of scientific assessment takes a wide diversity of forms.
Tools for scientific assessment should have a sound methodological basis, with transparency as a key element (Weingart, 2005); validation by experts should also count among the desirable features of an assessment system intended to be used responsibly and accepted by the scientific community.
After the publication in Spain of the rankings of scholarly book publishers’ prestige, Scholarly Publishers Indicators (SPI hereafter), developed from the opinions of Spanish scholars and validated by experts (Giménez-Toledo, Tejada-Artigas & Mañana-Rodríguez, 2013), the Spanish scientific assessment agency CNEAI (National Commission for the Evaluation of Research; BOE, 2014) included the rankings as a source of information and reference for the evaluation of scholarly books in the Social Sciences and Humanities. At that moment, criteria used by the National Agency for Quality Assessment and Accreditation of Spain (ANECA hereafter, the agency

