Asymptotic Theory of Iterated Empirical Risk Minimization, with Applications to Active Learning
We study a class of iterated empirical risk minimization (ERM) procedures in which two successive ERMs are performed on the same dataset, and the predictions of the first estimator enter as an argument in the loss function of the second. This setting, which arises naturally in active learning and reweighting schemes, introduces intricate statistical dependencies across samples and fundamentally distinguishes the problem from classical single-stage ERM analyses. For linear models trained with a broad class of convex losses on Gaussian mixture data, we derive a sharp asymptotic characterization of the test error in the high-dimensional regime where the sample size and ambient dimension scale proportionally. Our results provide explicit, fully asymptotic predictions for the performance of the second-stage estimator despite the reuse of data and the presence of prediction-dependent losses. We apply this theory to revisit a well-studied pool-based active learning problem, removing oracle and sample-splitting assumptions made in prior work. We uncover a fundamental tradeoff in how the labeling budget should be allocated across stages, and demonstrate a double-descent behavior of the test error driven purely by data selection, rather than model size or sample count.
💡 Research Summary
The paper investigates a class of iterated empirical risk minimization (ERM) procedures in which two successive ERM problems are solved on the same dataset, with the predictions of the first estimator entering as an argument in the loss of the second. This setting arises naturally in active learning, re‑weighting schemes, and multi‑step optimization algorithms, but it has not been rigorously analyzed in the high‑dimensional regime where the number of samples n and the ambient dimension d grow proportionally.
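As a concrete illustration of the setup, here is a minimal Python sketch of a two-stage (iterated) ERM: the same dataset is used twice, and the first-stage predictions enter the second-stage loss. The logistic loss, the uncertainty-style weighting, and all names here are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def fit_erm(X, y, loss_grad, steps=500, lr=0.5, reg=1e-3):
    """Gradient descent on the regularized empirical risk (1/n) sum_i l(y_i <x_i, w>)."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        margins = y * (X @ w)
        w -= lr * (X.T @ (y * loss_grad(margins)) / n + reg * w)
    return w

logistic_grad = lambda m: -1.0 / (1.0 + np.exp(m))   # derivative of log(1 + e^{-m})

# Toy two-class Gaussian mixture: x = y * mu + z with z ~ N(0, I_d).
rng = np.random.default_rng(0)
n, d = 2000, 200
mu = rng.normal(size=d) / np.sqrt(d)                 # class mean with ||mu|| = O(1)
y = rng.choice([-1.0, 1.0], size=n)
X = y[:, None] * mu + rng.normal(size=(n, d))

# Stage 1: ordinary logistic ERM on the full dataset.
w1 = fit_erm(X, y, logistic_grad)

# Stage 2: the first-stage predictions enter the second-stage loss, here as
# per-sample weights that emphasize low-margin (uncertain) points.
weights = 1.0 / (1.0 + np.abs(X @ w1))
w2 = fit_erm(X, y, lambda m: weights * logistic_grad(m))  # same data reused
```

Because w2 is fit on the very same samples that produced w1, the per-sample weights are statistically dependent on the data, which is exactly the coupling the paper's analysis has to handle.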
The authors focus on linear models trained with a broad family of convex, four‑times differentiable loss functions, and on data drawn from a Gaussian mixture model with a finite number of classes. They assume that the overlaps ⟨β, μ_c⟩, the pairwise inner products between class means ⟨μ_c, μ_{c′}⟩, and the norm ‖β‖ stay fixed as d → ∞, while n/d → α ∈ (0, ∞).
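For intuition, this proportional regime can be set up numerically as follows. A minimal sketch, assuming a K-class mixture x = μ_c + z with isotropic Gaussian noise; the sampler name and the specific scalings are illustrative choices:

```python
import numpy as np

def sample_gmm(n, d, means, rng):
    """Draw n samples from a K-class Gaussian mixture x = mu_c + z, z ~ N(0, I_d)."""
    c = rng.integers(len(means), size=n)              # uniform class labels
    X = np.stack([means[k] for k in c]) + rng.normal(size=(n, d))
    return X, c

alpha, d = 2.0, 400
n = int(alpha * d)                                    # proportional regime: n/d -> alpha
rng = np.random.default_rng(1)

# Class means scaled so that their norms and pairwise inner products
# <mu_c, mu_c'> stay O(1) as d grows, matching the assumptions above.
means = [rng.normal(size=d) / np.sqrt(d) for _ in range(3)]
X, c = sample_gmm(n, d, means, rng)
```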
The main technical contribution is a sharp, fully asymptotic characterization of any test metric of the form

E_gen = E_{x,c,ε}[ φ(⟨β̂, x⟩, c, ε) ],

where β̂ denotes the second-stage estimator and φ is a generic test function (covering, for instance, the classification error and the squared error).
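Such a metric can be checked against the asymptotic predictions by plain Monte Carlo on fresh test data. A minimal sketch, with φ and all names illustrative; label noise ε is folded into the class argument here for brevity:

```python
import numpy as np

def test_metric(w_hat, means, phi, n_test=100_000, seed=0):
    """Monte Carlo estimate of E_gen = E_{x,c}[ phi(<w_hat, x>, c) ]."""
    rng = np.random.default_rng(seed)
    c = rng.integers(len(means), size=n_test)         # fresh test classes
    X = np.stack([means[k] for k in c]) + rng.normal(size=(n_test, len(w_hat)))
    return np.mean(phi(X @ w_hat, c))

# Example phi: 0/1 classification error for a two-class mixture,
# with labels y(c) = +1 for class 0 and -1 for class 1.
zero_one = lambda pred, c: np.sign(pred) != np.where(c == 0, 1.0, -1.0)
```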
Comments & Academic Discussion
Loading comments...
Leave a Comment