Intermittent estimation of stationary time series
Let $\{X_n\}_{n=0}^{\infty}$ be a stationary real-valued time series with unknown distribution. Our goal is to estimate the conditional expectation of $X_{n+1}$ based on the observations $X_i$, $0\le i\le n$, in a strongly consistent way. Bailey and Ryabko proved that this is not possible even for ergodic binary time series if one estimates at all values of $n$. We propose a very simple algorithm which makes predictions infinitely often, at carefully selected stopping times chosen by our rule. We show that under certain conditions our procedure is strongly (pointwise) consistent, and $L_2$ consistent without any condition. An upper bound on the growth of the stopping times is also presented in this paper.
💡 Research Summary
The paper tackles the fundamental limitation identified by Bailey and Ryabko, namely that universal, strongly consistent prediction of a stationary time series at every time index is impossible even for binary ergodic processes. Instead of attempting to predict at every step, the authors introduce an “intermittent” prediction framework: predictions are made only at a sequence of random stopping times $\{\tau_k\}_{k\ge1}$ that are determined adaptively from the observed data.
Problem formulation.
Given a stationary real-valued process $\{X_n\}_{n\ge0}$ with unknown distribution, the goal is to estimate the conditional expectation $E[X_{n+1}\mid X_0,\dots,X_n]$ in a strongly consistent way, predicting only at the stopping times selected by the rule.