It is Time to Stop Teaching Frequentism to Non-statisticians
We should cease teaching frequentist statistics to undergraduates and switch to Bayes. Doing so will reduce the confusion and over-certainty that are rife among users of statistics.
💡 Research Summary
The paper makes a bold, polemical case that undergraduate statistics education for non-statisticians should abandon frequentist methods entirely and adopt a Bayesian framework. It begins by documenting the status quo: most undergraduate curricula still teach hypothesis testing, p-values, confidence intervals, and related concepts as the core of statistical inference. The authors argue that this emphasis creates a persistent set of misconceptions among students and practitioners. In particular, they point out that novices routinely interpret a p-value as “the probability that the null hypothesis is true,” treat the conventional 0.05 threshold as definitive proof of an effect, and consequently develop an unwarranted sense of certainty about their results. These misunderstandings, they claim, contribute to the broader reproducibility crisis and to policy decisions that are based on fragile statistical evidence.
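To see why that misconception matters, a p-value and the posterior probability of the null can disagree sharply even in the simplest setting. The sketch below is an illustration with made-up numbers, not an example from the paper: a two-sided test of a normal mean lands just under p = 0.05 while the posterior probability of the null, under a simple point-null model with even prior odds and an assumed prior scale under the alternative, stays far higher.

```python
# Illustrative sketch: p-value vs. posterior probability of the null (assumed model, not from the paper).
import numpy as np
from scipy import stats

# Hypothetical data: n observations with known sd, testing H0: mu = 0.
n, xbar, sd = 100, 0.197, 1.0                     # chosen so p lands just under 0.05
z = xbar / (sd / np.sqrt(n))
p_value = 2 * stats.norm.sf(abs(z))               # two-sided p-value

# Bayesian counterpart: point null H0: mu = 0 vs H1: mu ~ Normal(0, tau^2),
# with 50/50 prior odds. Both marginal likelihoods of xbar are normal.
tau = 1.0                                         # assumed prior scale under H1
m0 = stats.norm.pdf(xbar, loc=0, scale=sd / np.sqrt(n))
m1 = stats.norm.pdf(xbar, loc=0, scale=np.sqrt(tau**2 + sd**2 / n))
post_h0 = m0 / (m0 + m1)                          # P(H0 | data) under even prior odds

print(f"p-value = {p_value:.3f}, P(H0 | data) = {post_h0:.2f}")
# Typically: p ~ 0.049 yet P(H0 | data) ~ 0.6 -- the p-value is not the
# probability that the null hypothesis is true.
```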
The second part of the manuscript outlines why Bayesian inference is better suited to the learning needs of non-statisticians. By explicitly combining prior knowledge with observed data through Bayes’ theorem, Bayesian analysis yields a posterior probability that directly answers the question most users care about: “What is the probability that this hypothesis is true given the data?” This answer is intuitively meaningful, allows the incorporation of expert opinion or previous studies, and presents uncertainty as a continuous probability distribution rather than a binary “significant / not significant” decision. The authors also emphasize that Bayesian methods naturally align with decision-theoretic thinking: expected losses, costs, and benefits can be incorporated into the posterior analysis, making the statistical output immediately actionable.
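For reference, the posterior probability described here follows from Bayes’ theorem. For a hypothesis H and data D:

```latex
P(H \mid D) = \frac{P(D \mid H)\,P(H)}{P(D \mid H)\,P(H) + P(D \mid \neg H)\,P(\neg H)}
```

The prior P(H) encodes what was believed before the data were seen, and the posterior P(H | D) is the quantity the paper says users actually want.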
The paper then proposes a concrete roadmap for curricular transformation. First, textbooks and lecture slides should be rewritten to introduce priors, likelihoods, posteriors, and credible intervals early in the course, supported by interactive simulations that let students see how different priors affect results. Second, rather than discarding frequentist tools outright, a “hybrid” approach is suggested in which traditional tests are replaced by their Bayesian analogues (e.g., Bayesian t-tests, Bayesian ANOVA) and confidence intervals are supplanted by credible intervals. Third, assessment criteria should shift from “p-value < 0.05” to measures of posterior predictive accuracy, model comparison metrics such as WAIC or LOO-CV, and the correctness of decision-making based on posterior probabilities.
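A prior-sensitivity demonstration of the kind this roadmap envisions can be very short. The Beta-Binomial sketch below uses invented classroom data and priors, not the paper’s materials; it shows how the posterior mean and 95% credible interval shift as the prior changes.

```python
# Illustrative prior-sensitivity demo (hypothetical data and priors).
from scipy import stats

successes, trials = 7, 20                          # hypothetical classroom data

priors = {"flat Beta(1, 1)": (1, 1),
          "skeptical Beta(10, 10)": (10, 10),
          "optimistic Beta(8, 2)": (8, 2)}

for label, (a, b) in priors.items():
    post = stats.beta(a + successes, b + trials - successes)   # conjugate update
    lo, hi = post.ppf(0.025), post.ppf(0.975)                  # 95% credible interval
    print(f"{label:>24}: posterior mean = {post.mean():.2f}, "
          f"95% CrI = ({lo:.2f}, {hi:.2f})")
```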
The authors acknowledge the most common objections to a wholesale Bayesian shift. They concede that the choice of prior can be perceived as subjective and that Bayesian models can be computationally demanding. To address subjectivity, they recommend the use of weakly informative or reference priors together with sensitivity analyses that demonstrate robustness to prior specifications. Regarding computation, they argue that modern advances (Markov chain Monte Carlo, Hamiltonian Monte Carlo, variational inference, and cloud-based high-performance computing) have dramatically lowered the barrier to fitting complex Bayesian models even on large data sets.
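The computational point is also easy to demystify in a classroom: the core of Markov chain Monte Carlo fits in a few lines. The sketch below is a toy random-walk Metropolis sampler with made-up data, not code from the paper; real analyses would lean on dedicated tools, but the underlying idea is little more than this loop.

```python
# Toy random-walk Metropolis sampler for the mean of a normal model (illustration only).
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=1.5, scale=2.0, size=50)        # hypothetical data, sigma assumed known

def log_posterior(mu, sigma=2.0, prior_sd=10.0):
    log_prior = -0.5 * (mu / prior_sd) ** 2            # Normal(0, prior_sd^2) prior on mu
    log_lik = -0.5 * np.sum((data - mu) ** 2) / sigma ** 2
    return log_prior + log_lik

mu, draws = 0.0, []
for _ in range(5000):
    proposal = mu + rng.normal(scale=0.5)               # random-walk proposal
    if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(mu):
        mu = proposal                                    # accept; otherwise keep current mu
    draws.append(mu)

posterior = np.array(draws[1000:])                       # discard burn-in
print(f"posterior mean ~ {posterior.mean():.2f}, 95% CrI ~ "
      f"({np.percentile(posterior, 2.5):.2f}, {np.percentile(posterior, 97.5):.2f})")
```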
Illustrative case studies are presented from clinical trials, where Bayesian adaptive designs enable early stopping for efficacy or futility while preserving ethical standards, and from the social sciences, where Bayesian structural equation models capture intricate causal pathways with quantified uncertainty. These examples serve to demonstrate that Bayesian methods are not merely philosophically appealing but also practically advantageous in real-world research.
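The adaptive-design logic can likewise be illustrated with a toy interim analysis (hypothetical numbers, not the paper’s case study): stop a single-arm trial early once the posterior probability that the response rate beats a historical benchmark crosses a preset threshold.

```python
# Toy Bayesian interim-analysis rule (assumed benchmark, threshold, and data).
from scipy import stats

benchmark = 0.30          # assumed historical control response rate
stop_threshold = 0.95     # posterior probability required to stop for efficacy
a, b = 1, 1               # Beta(1, 1) prior on the treatment response rate

interim_batches = [(4, 10), (6, 10), (7, 10)]         # hypothetical (responses, patients)
for i, (resp, n) in enumerate(interim_batches, start=1):
    a, b = a + resp, b + (n - resp)                    # conjugate Beta-Binomial update
    p_better = 1 - stats.beta(a, b).cdf(benchmark)     # P(rate > benchmark | data so far)
    print(f"interim {i}: P(rate > {benchmark:.2f}) = {p_better:.3f}")
    if p_better > stop_threshold:
        print("stopping early for efficacy")
        break
```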
In the final section, the paper issues policy recommendations. Universities should revise degree requirements, fund faculty development workshops on Bayesian pedagogy, and encourage the creation of open-source teaching materials. Journals and funding agencies are urged to accept Bayesian analyses as standard, to require pre-registration of priors where appropriate, and to promote transparent reporting of posterior distributions.
In sum, the authors contend that the “statistical mindset” taught to non-statisticians must be re-engineered. While frequentist inference has historical merit, they argue that the modern data ecosystem, characterized by interdisciplinary collaboration, high-dimensional data, and decision-critical applications, demands a Bayesian approach that is more intuitive, more honest about uncertainty, and more directly linked to actionable conclusions. They conclude that replacing frequentist teaching with Bayesian instruction will reduce confusion, curb over-confidence, and ultimately improve the quality and credibility of statistical practice across the sciences.