On the relation between likelihood ratios and p-values for testing success probabilities of Bernoulli trials
It is well known that there is no direct one-to-one relation between $p$-values and likelihood ratios or Bayes factors, since their relation crucially involves the sample size $n$. We investigate their (asymptotic) relation in a coin-tossing context where the hypotheses of interest address the success probability of the coin, and where detailed computations are possible. This leads to useful insights into the nature of $p$-values and likelihood ratios. Our results imply, for instance, that under mild conditions, a $p$-value of 0.05 cannot correspond to a likelihood ratio larger than 7.5, for any hypothesis versus a null hypothesis that the success probability has a specific value. We also show that it is unlikely that one can obtain a large likelihood ratio by tossing a fair coin until the number of heads deviates from its mean by several standard deviations.
💡 Research Summary
The paper investigates the quantitative relationship between p‑values and likelihood ratios (or Bayes factors) in the context of testing the success probability of Bernoulli trials, using the simple and analytically tractable setting of coin tossing. The authors begin by recalling that p‑values are probabilities of observing data as extreme as, or more extreme than, what was actually observed under a null hypothesis, and therefore they do not directly measure the strength of evidence. In contrast, a likelihood ratio compares the probability of the observed data under two competing hypotheses and thus provides a relative measure of evidential support.
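To make this distinction concrete, here is a small stdlib-only Python sketch (my own illustration, not code from the paper; the values $n = 100$, $k = 60$ are chosen for the example) that computes a two-sided binomial $p$-value under the null $\theta = \tfrac{1}{2}$ and the likelihood ratio of the maximum-likelihood alternative $\hat\theta = k/n$ against that null:

```python
from math import comb

def p_value_two_sided(n, k):
    """P(|K - n/2| >= |k - n/2|) for K ~ Binomial(n, 1/2)."""
    dev = abs(k - n / 2)
    tail = sum(comb(n, j) for j in range(n + 1) if abs(j - n / 2) >= dev)
    return tail / 2**n

def likelihood_ratio_mle(n, k):
    """Likelihood of the data under theta_hat = k/n versus theta = 1/2."""
    th = k / n
    return (th**k * (1 - th) ** (n - k)) / 0.5**n

print(p_value_two_sided(100, 60))     # roughly 0.057
print(likelihood_ratio_mle(100, 60))  # roughly 7.5
```

For $n = 100$ and $k = 60$ the two-sided $p$-value is about 0.057 while the maximized likelihood ratio is about 7.5, which is consistent with the abstract's claim that a $p$-value near 0.05 cannot correspond to a likelihood ratio much larger than 7.5.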
Uniform versus fair coin (Section 2.1).
The first comparison is between H₁: the success probability θ is uniformly distributed on (0, 1) and H₂: θ = ½ (a fair coin). After n tosses with k heads, the likelihood ratio is
$$\mathrm{LR} = \frac{\int_0^1 \theta^k (1-\theta)^{n-k}\,d\theta}{(1/2)^n} = \frac{2^n\,k!\,(n-k)!}{(n+1)!} = \frac{2^n}{(n+1)\binom{n}{k}}.$$
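This ratio can be checked numerically. The marginal likelihood under the uniform prior is the Beta integral $\int_0^1 \theta^k (1-\theta)^{n-k}\,d\theta = 1/\bigl((n+1)\binom{n}{k}\bigr)$, and dividing by the fair-coin likelihood $(1/2)^n$ gives $2^n/\bigl((n+1)\binom{n}{k}\bigr)$. A minimal Python sketch (my own, for illustration):

```python
from math import comb

def lr_uniform_vs_fair(n, k):
    """Likelihood ratio of H1 (theta ~ Uniform(0,1)) vs H2 (theta = 1/2)
    after observing k heads in n tosses."""
    # Beta integral: int_0^1 theta^k (1-theta)^(n-k) dtheta
    # equals 1 / ((n+1) * C(n, k)); divide by the fair-coin
    # likelihood (1/2)^n to get the ratio.
    return 2**n / ((n + 1) * comb(n, k))

print(lr_uniform_vs_fair(1, 0))  # one toss, tails: both marginals are 1/2, so LR = 1
```

As a sanity check, a single toss gives LR = 1 for either outcome: under the uniform prior the marginal probability of heads (or tails) is exactly $\tfrac{1}{2}$, matching the fair coin.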