A Generalized Publication Bias Model

Reading time: 6 minutes
...

📝 Original Info

  • Title: A Generalized Publication Bias Model
  • ArXiv ID: 0808.1588
  • Date: 2008-08-13
  • Authors: Peter H. Schonemann (Department of Psychology, Purdue University); Jeffrey D. Scargle (Space Science Division, NASA Ames Research Center)

📝 Abstract

Scargle (2000) has discussed Rosenthal and Rubin's (1978) "fail-safe number" (FSN) method for estimating the number of unpublished studies in meta-analysis. He concluded that this FSN cannot possibly be correct because a central assumption the authors used conflicts with the very definition of publication bias. While this point has been made by others before (Elsahoff, 1978; Darlington, 1980; Thomas, 1985; Iyengar & Greenhouse, 1988), Scargle showed, by way of a simple 2-parameter model, how far off Rosenthal & Rubin's estimate can be in practice. However, his results relied on the assumption that the decision variable is normally distributed with zero mean. In this case the ratio of unpublished to published papers is large only in a tiny region of the parameter plane. Building on these results, we now show that (1) Replacing densities with probability masses greatly simplifies Scargle's derivations and permits an explicit statement of the relation between the probability α of Type I errors and the step-size β; (2) This result does not require any distribution assumptions; (3) The distinction between 1-sided and 2-sided rejection regions becomes immaterial; (4) This distribution-free approach leads to an immediate generalization to partitions involving more than two intervals, and thus covers more general selection functions.
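The two-interval setup described here can be made concrete with a short numerical sketch. The selection function below is a hypothetical reading used for illustration only: under the null hypothesis a fraction α of studies falls in the rejection region and is always published, while non-significant studies are published only with probability β (the step size). Under that assumption the ratio of unpublished to published studies depends only on the two probability masses, not on any distributional form.

```python
# Illustrative sketch only: the step selection function assumed here (publish
# significant results with certainty, non-significant ones with probability
# beta) is a hypothetical reading of the two-interval model, not the paper's
# exact derivation.

def file_drawer_ratio(alpha: float, beta: float) -> float:
    """Expected ratio of unpublished to published studies under the null.

    alpha : probability mass in the rejection region (Type I error rate)
    beta  : assumed publication probability for non-significant results
    """
    published = alpha + beta * (1.0 - alpha)      # mass that reaches print
    unpublished = (1.0 - beta) * (1.0 - alpha)    # mass left in file drawers
    return unpublished / published


if __name__ == "__main__":
    # With a hard cutoff (beta = 0) and alpha = 0.05, about 19 studies stay
    # unpublished for every one that appears in print; the ratio collapses
    # quickly as beta grows.
    for beta in (0.0, 0.1, 0.5):
        print(f"alpha=0.05, beta={beta:.1f}: ratio = {file_drawer_ratio(0.05, beta):.2f}")
```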

📄 Full Content

A Generalized Publication Bias Model

Peter H. Schonemann¹ and Jeffrey D. Scargle²

¹ Department of Psychology, Purdue University, West Lafayette IN 47906 USA (phs@psych.purdue.edu)
² Space Science Division, NASA Ames Research Center, Moffett Field CA 94035-100 USA (jeffrey@cosmic.arc.nasa.gov)

KEY WORDS: publication bias, meta-analysis, file-drawer hypothesis, fail-safe number.

Minor modification of the version published in the Chinese Journal of Psychology, 2008, Vol. 50, 1, 21-29.

Abstract

Scargle (2000) has discussed Rosenthal and Rubin's (1978) "fail-safe number" (FSN) method for estimating the number of unpublished studies in meta-analysis. He concluded that this FSN cannot possibly be correct because a central assumption the authors used conflicts with the very definition of publication bias. While this point has been made by others before (Elsahoff, 1978; Darlington, 1980; Thomas, 1985; Iyengar & Greenhouse, 1988), Scargle showed, by way of a simple 2-parameter model, how far off Rosenthal & Rubin's estimate can be in practice. However, his results relied on the assumption that the decision variable is normally distributed with zero mean. In this case the ratio of unpublished to published papers is large only in a tiny region of the parameter plane.

Building on these results, we now show that (1) Replacing densities with probability masses greatly simplifies Scargle's derivations and permits an explicit statement of the relation between the probability α of Type I errors and the step-size β; (2) This result does not require any distribution assumptions; (3) The distinction between 1-sided and 2-sided rejection regions becomes immaterial; (4) This distribution-free approach leads to an immediate generalization to partitions involving more than two intervals, and thus covers more general selection functions.
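Point (4) can be illustrated with a small distribution-free sketch. The partition, the interval probability masses, and the per-interval publication probabilities below are hypothetical values chosen only to show the bookkeeping: once a selection function assigns each interval its own publication probability, the expected unpublished-to-published ratio depends only on probability masses, which is why no distributional assumption is needed.

```python
# Hedged sketch of the distribution-free, multi-interval view. The three
# intervals and the numbers used in the demo are hypothetical, not taken
# from the paper.

def unpublished_to_published(masses, pub_probs):
    """Ratio of expected unpublished to published studies.

    masses    : probability mass of the decision variable in each interval
                (must sum to 1); no distributional form is assumed.
    pub_probs : probability that a study falling in each interval gets
                published (a general selection function over the partition).
    """
    if abs(sum(masses) - 1.0) > 1e-9:
        raise ValueError("interval masses must sum to 1")
    published = sum(m * s for m, s in zip(masses, pub_probs))
    unpublished = sum(m * (1.0 - s) for m, s in zip(masses, pub_probs))
    return unpublished / published


if __name__ == "__main__":
    # Three-interval example: 'significant', 'marginal', and 'clearly null'
    # results, published with probability 1.0, 0.3, and 0.05 respectively.
    masses = [0.05, 0.20, 0.75]
    pub_probs = [1.0, 0.3, 0.05]
    print(f"unpublished : published = {unpublished_to_published(masses, pub_probs):.2f} : 1")
```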

1. Introduction: Historical Context

In the late 1970s, Rosenthal & Rubin (1978) and Rosenthal (1979) proposed a new method for coping with the nagging "file-drawer problem" of meta-analysis. In essence, meta-analysis is a quantitative method for aggregating the statistical results of a number of similar studies on a particular topic into a hopefully more conclusive larger study, relying on methods proposed by Wallis (1942), Fisher (1948), and others. These procedures are statistically sound as long as the necessary assumptions, most importantly the fairness of the samples, are met at least approximately. Problems arise when they are not met, as in meta-analysis, a supposedly more objective method for evaluating accumulated research.

Mahoney (1977) has defined "confirmatory bias" as "the tendency to emphasize and believe experiences which support one's views or discredit those which do not" (p. 161). This type of bias has been repeatedly verified and seems to be pervasive and quite robust. After asking 75 journal reviewers "to referee manuscripts which described identical experimental procedures but which reported positive, negative, mixed or no results", he not only found poor inter-rater agreement, but also, more to the point here, that "reviewers were strongly biased against manuscripts which reported results contrary to their theoretical perspective" (loc. cit.).

In an effort to cope with this challenge, Rosenthal & Rubin (1978) proposed a so-called "fail-safe number" (FSN) approach intended to estimate post hoc the number of unpublished papers that languish in file drawers because they had been rejected as a result of confirmatory bias. The implied claim was that a positive effect in the published sample could only be due to publication bias if there were at least this number of unpublished papers. Typically, this FSN turned out to be very large. For example, in the first paper published on this topic in Behavioral and Brain Sciences (BBS), Rosenthal & Rubin (1978) reported an FSN of 65,122 for 345 published studies. The authors concluded that "It certainly seems unlikely that there are file drawers crammed with the unpublished results of over 65,000 studies of interpersonal expectations" (p. 381). One of the commentators, Elsahoff (1978), presented a simple counterexample in the same issue that suggested the FSN logic had to be flawed, since Rosenthal & Rubin's FSN appeared to be far too large. One year later, Rosenthal (1979) restated this FSN argument in the more widely read Psychological Bulletin. This paper spawned a veritable avalanche of meta-analytic literature reviews, which continues unabated to this day, and which is predicated on the mistaken belief that Rosenthal & Rubin had banished the confirmatory publicat

…(Full text truncated)…
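For concreteness, the fail-safe number discussed in the introduction is conventionally computed from the combined Z-scores of the k published studies, using the formula commonly attributed to Rosenthal (1979): FSN = (ΣZ_i)² / z_α² − k, with z_α = 1.645 for a one-tailed test at α = .05. The logic is that adding X studies with Z = 0 dilutes the Stouffer-combined Z-score by a factor of √(k/(k+X)); solving for the X that pushes it to the critical value gives the formula. The sketch below implements that conventional formula; the input Z-scores are hypothetical and only meant to show how quickly the FSN grows with the number of studies.

```python
# Hedged sketch of the fail-safe N as commonly stated for Rosenthal (1979):
# FSN = (sum of Z_i)^2 / z_alpha^2 - k, with z_alpha = 1.645 for a one-tailed
# test at alpha = .05. The Z-scores below are hypothetical illustration values.

def fail_safe_n(z_scores, z_alpha: float = 1.645) -> float:
    """Number of null (Z = 0) studies needed to render the Stouffer-combined
    Z-score non-significant at the chosen one-tailed threshold."""
    k = len(z_scores)
    z_sum = sum(z_scores)
    return (z_sum ** 2) / (z_alpha ** 2) - k


if __name__ == "__main__":
    # 345 hypothetical studies with a modest average Z of about 1.2 each
    # already yield a fail-safe number in the tens of thousands.
    z_scores = [1.2] * 345
    print(f"k = {len(z_scores)}, FSN ≈ {fail_safe_n(z_scores):,.0f}")
```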

Reference

This content is AI-processed based on ArXiv data.
