Fast Rates for Nonstationary Weighted Risk Minimization
Weighted empirical risk minimization is a common approach to prediction under distribution drift. This article studies its out-of-sample prediction error under nonstationarity. We provide a general decomposition of the excess risk into a learning term and an error term associated with distribution drift, and prove oracle inequalities for the learning error under mixing conditions. The learning bound holds uniformly over arbitrary weight classes and accounts for the effective sample size induced by the weight vector, the complexity of the weight and hypothesis classes, and potential data dependence. We illustrate the applicability and sharpness of our results in (auto-) regression problems with linear models, basis approximations, and neural networks, recovering minimax-optimal rates (up to logarithmic factors) when specialized to unweighted and stationary settings.
💡 Research Summary
This paper investigates the generalization performance of weighted empirical risk minimization (ERM) in settings where the data-generating distribution drifts over time and may exhibit dependence. The authors introduce a clean decomposition of the excess risk into two components: a “learning error” that captures estimation and approximation errors relative to the weighted risk minimizer, and a “drift error” that measures how well the chosen weight vector tracks the current distribution. Formally, for a weight vector w with ∑_{t=1}ⁿ w_t = 1, they define the weighted empirical risk R_wⁿ(h) = ∑_{t=1}ⁿ w_t L(X_t, h) and its population counterpart R_w(h) = ∑_{t=1}ⁿ w_t E[L(X_t, h)].
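In this notation, the decomposition can be sketched as follows. This is a schematic form consistent with the summary above, not necessarily the paper's exact statement: write ĥ_w for the weighted ERM over a hypothesis class H, and R_n for the risk under the current (time-n) distribution.

```latex
% Schematic excess-risk decomposition (a sketch, not the paper's exact statement).
% R_n is the risk under the current distribution; R_w the weighted population risk.
\[
  \underbrace{R_n(\hat h_w) - \inf_{h \in \mathcal{H}} R_n(h)}_{\text{excess risk}}
  \;\le\;
  \underbrace{R_w(\hat h_w) - \inf_{h \in \mathcal{H}} R_w(h)}_{\text{learning error}}
  \;+\;
  \underbrace{2 \sup_{h \in \mathcal{H}} \bigl| R_w(h) - R_n(h) \bigr|}_{\text{drift error}}
\]
```

The inequality follows by adding and subtracting R_w(ĥ_w) and R_w(h*) for the current-risk minimizer h*: the learning error is controlled by the paper's oracle inequalities, while the drift error depends only on how well the weights track the current distribution.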
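For concreteness, here is a minimal numerical sketch of weighted ERM in the squared-loss linear setting, assuming exponentially decaying weights as one illustrative member of the weight classes the paper allows; the function names and the drift model below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def exponential_weights(n, rho):
    """Weights w_t ∝ rho^(n-t), normalized so that ∑_t w_t = 1.

    An illustrative choice: recent observations get more weight,
    which keeps the drift error small at the cost of a smaller
    effective sample size.
    """
    w = rho ** np.arange(n - 1, -1, -1)  # t = 1, ..., n
    return w / w.sum()

def weighted_erm_linear(X, y, w):
    """Weighted ERM for squared loss over linear hypotheses:
    minimizes R_w^n(h) = ∑_t w_t (y_t - x_t' h)^2 via the
    weighted normal equations (X' W X) h = X' W y.
    """
    Xw = X * w[:, None]               # each row x_t scaled by w_t
    return np.linalg.solve(X.T @ Xw, Xw.T @ y)

# Hypothetical drifting linear model: the true coefficient vector
# moves slowly across the sample.
rng = np.random.default_rng(0)
n, d = 500, 3
X = rng.normal(size=(n, d))
betas = np.linspace([1.0, 0.0, -1.0], [0.5, 1.0, -0.5], n)
y = (X * betas).sum(axis=1) + 0.1 * rng.normal(size=n)

w = exponential_weights(n, rho=0.99)
h_hat = weighted_erm_linear(X, y, w)

# One standard notion of the effective sample size induced by w
# (cf. the abstract's "effective sample size" for the learning bound):
n_eff = 1.0 / np.sum(w ** 2)
print(h_hat, n_eff)
```

The decay parameter rho trades the two error terms off against each other: smaller rho tracks the current distribution more closely (smaller drift error), while larger rho yields a larger effective sample size 1/∑_t w_t² (smaller learning error).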