Optimal Hold-Out Size in Cross-Validation


Cross-validation (CV) is routinely used across the sciences to select models and tune parameters, and the resulting choices are often interpreted as substantive scientific conclusions (e.g., which variables, mechanisms, or risk factors are "supported by the data"). A key part of the CV procedure, the hold-out size, or equivalently the fold count $K$, is typically set by convention (e.g., 80/20, $K=5$) rather than by a principled criterion. Central to the issue is the tradeoff between training and testing: increasing the training sample size improves model accuracy, while sacrificing certainty around the accuracy itself. We formalize the tradeoff by targeting predictive performance and explicitly penalizing evaluation uncertainty, which cannot be identified from the data without additional assumptions. We derive finite-sample expressions of this evaluation uncertainty under symmetric errors and general upper bounds under broader error conditions, yielding a transparent utility-based rule for selecting the hold-out size as a function of an irreducible-noise parameter. Empirical analyses with linear regression and random forests across multiple domains, and a high-dimensional genomics application, show that (i) the choice of $K$ depends on the data and model, (ii) the optimal $K$ varies with the assumption on the irreducible error, and (iii) the implied inferential conclusions can change materially as the irreducible error, and thus $K$, varies. The resulting framework replaces a one-size-fits-all convention with a context-specific, assumption-explicit choice of $K$, enabling more reliable model comparisons and downstream scientific inference.


💡 Research Summary

The paper tackles a surprisingly under‑examined aspect of cross‑validation (CV): the choice of hold‑out size, equivalently the number of folds K. While practitioners routinely adopt conventions such as an 80/20 split, K = 5 or K = 10, the authors argue that this decision directly influences model accuracy, the uncertainty of performance estimates, and ultimately scientific conclusions drawn from the analysis.

The authors formalize the trade‑off between training‑set size (which reduces model bias) and test‑set size (which reduces the variance of the performance estimate). They adopt a decision‑theoretic perspective, targeting predictive performance net of irreducible noise σ² and explicitly penalizing evaluation uncertainty, which cannot be identified from the data without additional assumptions (Bengio & Grandvalet, 2004).
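The trade-off above can be illustrated with a small simulation. The sketch below scores a few candidate fold counts by a utility of the form "expected hold-out loss plus a penalty on its spread"; the penalty weight `lam` and the specific utility form are illustrative assumptions, not the paper's exact criterion.

```python
import numpy as np

rng = np.random.default_rng(0)

def holdout_utility(n=200, p=5, sigma=1.0, lam=1.0, n_sims=200):
    """For each candidate fold count K, simulate linear-regression datasets,
    fit on the training part, and record the hold-out MSE. Score each K by
    a utility that rewards low expected loss but penalizes the variability
    of the loss estimate across simulations (illustrative form only)."""
    results = {}
    for K in (2, 5, 10, 20):
        losses = []
        for _ in range(n_sims):
            beta = rng.normal(size=p)
            X = rng.normal(size=(n, p))
            y = X @ beta + sigma * rng.normal(size=n)
            n_test = n // K            # larger K -> smaller hold-out set
            Xtr, ytr = X[n_test:], y[n_test:]
            Xte, yte = X[:n_test], y[:n_test]
            coef, *_ = np.linalg.lstsq(Xtr, ytr, rcond=None)
            losses.append(np.mean((yte - Xte @ coef) ** 2))
        losses = np.asarray(losses)
        # Utility: negative (mean loss + lam * uncertainty of the estimate).
        results[K] = -(losses.mean() + lam * losses.std())
    best_K = max(results, key=results.get)
    return best_K, results
```

Varying `lam` (how heavily evaluation uncertainty is penalized) shifts the preferred `K`, which mirrors the paper's point that the optimal fold count is context- and assumption-dependent rather than universal.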

Two error regimes are considered. For symmetric error distributions (e.g., Gaussian), the authors derive an exact finite-sample expression for the variance of the hold-out loss; for broader error conditions, they provide general upper bounds on this evaluation uncertainty.
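The basic scaling behind such variance expressions can be checked numerically. For iid losses, the variance of the average hold-out loss decays as 1/n_test; with Gaussian noise $e \sim N(0, \sigma^2)$, a single squared error has variance $2\sigma^4$, so the average over $n_{\text{test}}$ points has variance roughly $2\sigma^4 / n_{\text{test}}$. This is the standard iid result, not the paper's exact finite-sample expression, and the perfectly specified model below is a simplifying assumption.

```python
import numpy as np

rng = np.random.default_rng(1)

def holdout_loss_variance(n_test, sigma=1.0, n_sims=20000):
    """Monte-Carlo variance of the average squared error on a hold-out
    set of pure Gaussian noise (i.e., a fixed, correctly specified model,
    so only irreducible error remains)."""
    errs = sigma * rng.normal(size=(n_sims, n_test))
    return np.mean(errs ** 2, axis=1).var()

# Expect Var(mean loss) ~ 2 * sigma**4 / n_test:
# quadrupling n_test should cut the variance by about 4.
v_small = holdout_loss_variance(50)
v_large = holdout_loss_variance(200)
```

Shrinking the hold-out set (larger `K`) inflates this variance, which is exactly the evaluation uncertainty the utility-based rule trades off against training-set size.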

