Model selection for weakly dependent time series forecasting
Observing a stationary time series, we propose a two-step procedure for predicting its next value. The first step follows the machine learning paradigm: it determines a set of possible predictors as randomized estimators in (possibly numerous) different predictive models. The second step follows the model selection paradigm: it chooses one predictor with good properties among all the predictors of the first step. We study our procedure for two types of observations: causal Bernoulli shifts and bounded weakly dependent processes. In both cases, we give oracle inequalities: the risk of the chosen predictor is close to the best prediction risk over all the predictive models considered. We apply our procedure to predictive models such as linear predictors, neural network predictors, and non-parametric autoregression.
💡 Research Summary
The paper addresses the problem of forecasting a stationary time series when the observations exhibit only weak dependence. It proposes a two‑step procedure that blends ideas from statistical learning theory (randomized estimators) with classical model‑selection techniques.
In the first step a collection of predictive models is specified (linear autoregressions, neural networks, non‑parametric autoregressive kernels, etc.). For each model class \( \mathcal{M}_k \) a prior distribution \( \pi_k \) over its parameter space is introduced, and a Gibbs‑type posterior (or “randomized estimator”) is defined by weighting parameters according to the empirical loss on the training block. Formally, for a temperature parameter \( \lambda>0 \) the posterior is
\[
\hat{\rho}_{k,\lambda}(d\theta) \propto \exp\bigl(-\lambda\, r_n(\theta)\bigr)\,\pi_k(d\theta),
\]
where \( r_n(\theta) \) denotes the empirical prediction risk of the parameter \( \theta \) on the training block.
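The Gibbs‑posterior weighting above can be sketched numerically. The following is a minimal illustration, not the paper's implementation: it assumes a simulated AR(1) series, a single hypothetical model class of AR(1) predictors over a finite coefficient grid with a uniform prior, and an arbitrary temperature `lam`; the posterior weights each candidate coefficient by the exponentiated negative empirical quadratic risk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: a simulated stationary AR(1) series standing in for the observations.
n = 200
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.6 * x[t - 1] + rng.normal(scale=0.5)

# Hypothetical model class: AR(1) predictors x_hat_t = theta * x_{t-1},
# with a uniform prior pi over a finite grid of candidate coefficients.
thetas = np.linspace(-0.95, 0.95, 39)

# Empirical prediction risk r_n(theta) on the training block (quadratic loss).
risks = np.array([np.mean((x[1:] - th * x[:-1]) ** 2) for th in thetas])

# Gibbs-type posterior: weight each parameter by exp(-lambda * r_n(theta)),
# here against the uniform prior. The shift by risks.min() only improves
# numerical stability and cancels in the normalization.
lam = 50.0  # temperature parameter (arbitrary choice for illustration)
w = np.exp(-lam * (risks - risks.min()))
w /= w.sum()

# Randomized prediction of the next value: posterior-weighted aggregate predictor.
x_next = float(np.sum(w * thetas) * x[-1])
```

With a discrete parameter grid the posterior reduces to a softmax over empirical risks; in the paper the randomized estimator is drawn from (or aggregated under) the continuous posterior over each model's parameter space.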