Bayesian sample size calculations for external validation studies of risk prediction models
Contemporary sample size calculations for external validation of risk prediction models require users to specify fixed values of assumed model performance metrics alongside target precision levels (e.g., 95% CI widths). However, because previous studies are based on finite samples, our knowledge of true model performance in the target population is uncertain, so fixing these values paints an incomplete picture. Moreover, for net benefit (NB) as a measure of clinical utility, the relevance of conventional precision-based inference is doubtful. In this work, we propose a general Bayesian framework for multi-criteria sample size considerations for prediction models with binary outcomes. For statistical metrics of performance (e.g., discrimination and calibration), we propose sample size rules that target a desired expected precision or a desired assurance probability that the precision criterion will be satisfied. For NB, we propose rules based on Optimality Assurance (the probability that the planned study correctly identifies the optimal strategy) and on Value of Information (VoI) analysis. We showcase these developments in a case study on the validation of a risk prediction model for deterioration of hospitalized COVID-19 patients. Compared with conventional sample size calculation methods, the Bayesian approach requires explicit quantification of uncertainty around model performance, and thereby enables flexible sample size rules based on expected precision, assurance probabilities, and VoI. In our case study, VoI calculations for NB suggest that considerably smaller sample sizes are needed than when targeting the precision of calibration metrics.
💡 Research Summary
This paper addresses a critical gap in the design of external validation studies for binary-outcome risk prediction models. Traditional sample-size calculations, such as those proposed by Riley and colleagues, require investigators to input fixed point estimates of model performance (e.g., c-statistic, calibration slope, outcome prevalence) and then target a pre-specified precision (usually a 95% confidence-interval width). The authors argue that fixing these performance values ignores the inevitable uncertainty stemming from the limited sample sizes of prior studies, and that for decision-theoretic measures like Net Benefit (NB) the relevance of precision-based criteria is questionable.
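To make the contrast concrete, here is a minimal sketch of this style of fixed-value calculation for one common criterion, the observed/expected (O/E) ratio, assuming a fixed outcome prevalence and true O/E = 1. The delta-method variance Var(ln(O/E)) ≈ (1 − φ)/(nφ) is standard for this setting, but the specific inputs below are illustrative, not taken from the paper.

```python
import math

def n_for_oe_width(phi, tau, z=1.96):
    """Smallest n so that the 95% CI width for O/E is <= tau,
    assuming a fixed outcome prevalence phi and true O/E = 1."""
    # Delta method: Var(ln(O/E)) ~= (1 - phi) / (n * phi).
    # With point estimate O/E = 1, the CI is [exp(-z*se), exp(z*se)],
    # whose width is 2*sinh(z*se); invert to get the required SE, then n.
    se_target = math.asinh(tau / 2) / z
    return math.ceil((1 - phi) / (phi * se_target ** 2))

# Illustrative inputs: assumed prevalence 10%, target O/E CI width 0.2.
print(n_for_oe_width(phi=0.10, tau=0.20))  # prints 3469 for these inputs
```

Note that the answer is a single number driven entirely by the assumed fixed prevalence; the Bayesian framework described next replaces such fixed inputs with prior distributions.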
To overcome these limitations, the authors develop a general Bayesian framework that treats model-performance parameters as random variables with prior distributions reflecting existing evidence (published estimates, standard errors, expert opinion). For a given candidate sample size N, they repeatedly draw from the prior, simulate a validation dataset conditional on the drawn parameters, and compute the performance metrics and their confidence-interval widths, thereby obtaining a prior-predictive (pre-posterior) distribution of the quantities of interest. This Monte Carlo approach makes it possible to evaluate several distinct sample-size rules (illustrated in the sketch after this list):
- Expected CI Width (ECIW) – the average confidence-interval width across simulated datasets is required to fall below a target τ; the smallest N satisfying E[CIW(N)] ≤ τ is selected.
- Assurance – the smallest N for which the probability, across prior-predictive draws, that the realised CI width falls below τ reaches a desired level (e.g., 90%).
- Optimality Assurance – for NB, the smallest N for which the planned study identifies the strategy with the highest true net benefit with a desired probability.
- Value of Information (VoI) – the expected gain in NB from the information a study of size N would provide, used to judge when further sampling is no longer worthwhile.
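The sketch below illustrates the prior-predictive simulation for one metric, the O/E ratio, and applies the ECIW and assurance rules to it. All priors, grid limits, and targets are hypothetical placeholders, and the authors' actual implementation (which also covers the c-statistic, calibration slope, and NB) may differ in its details.

```python
import numpy as np

rng = np.random.default_rng(2024)

# --- Illustrative priors (hypothetical values, not taken from the paper) ---
A_PREV, B_PREV = 40, 360   # Beta prior on prevalence: ~40 events in 400 patients
SD_LOG_OE = 0.15           # lognormal prior on the true O/E ratio, centred on 1

Z = 1.96                   # 95% CI multiplier
TAU = 0.22                 # target CI width for O/E
ASSURANCE_TARGET = 0.90    # target P(realised width <= TAU)
N_SIM = 5_000              # prior-predictive Monte Carlo draws


def oe_ci_widths(n, n_sim=N_SIM):
    """Prior-predictive 95% CI widths for O/E in a validation study of size n."""
    phi = rng.beta(A_PREV, B_PREV, n_sim)            # true prevalence, drawn from prior
    oe_true = rng.lognormal(0.0, SD_LOG_OE, n_sim)   # true O/E, drawn from prior
    pbar = phi / oe_true                             # implied mean predicted risk
    events = np.maximum(rng.binomial(n, phi), 1)     # simulated observed event counts
    oe_hat = events / (n * pbar)                     # estimated O/E in each dataset
    se_log = np.sqrt((1 - events / n) / events)      # delta-method SE of ln(O/E)
    return np.exp(np.log(oe_hat) + Z * se_log) - np.exp(np.log(oe_hat) - Z * se_log)


def smallest_n(rule, grid=range(200, 5001, 100)):
    """First n on the grid whose prior-predictive widths satisfy the given rule."""
    return next((n for n in grid if rule(oe_ci_widths(n))), None)


n_eciw = smallest_n(lambda w: w.mean() <= TAU)                         # ECIW rule
n_assur = smallest_n(lambda w: (w <= TAU).mean() >= ASSURANCE_TARGET)  # assurance rule
print(f"ECIW rule:      smallest n with E[width] <= {TAU} is {n_eciw}")
print(f"Assurance rule: smallest n with P(width <= {TAU}) >= {ASSURANCE_TARGET} is {n_assur}")
```

Because the assurance rule controls the upper tail of the prior-predictive width distribution rather than its mean, it will generally return a larger n than the ECIW rule for the same target τ.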