Self-Normalized Concentration Inequalities of Marginal Mean with Sample Variance Only

Notice: This research summary and analysis were automatically generated using AI technology. For absolute accuracy, please refer to the original arXiv source.

(This is the third version of a working paper.) We develop a family of self-normalized concentration inequalities for the marginal mean under a martingale-difference structure and under $\phi/\tilde{\phi}$-mixing conditions, where the latter class includes many processes that are not strongly mixing. The variance term is fully data-observable: the naive sample variance in the martingale case and an empirical block long-run variance under mixing conditions, so no predictable variance proxy is required. No assumption on the decay of the mixing coefficients (e.g., summability) is needed for validity. The constants are explicit and the bounds are ready to use.


💡 Research Summary

The paper develops a family of self-normalized concentration inequalities that rely solely on observable sample variance, eliminating the need for a predictable variance proxy. The authors treat two broad settings: (i) martingale-difference sequences (MDS) with bounded increments, and (ii) processes satisfying ϕ-mixing or its relaxed variant ϕ̃-mixing, which include many time-series models that are not strongly mixing.
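The paper's exact block long-run variance construction is not reproduced in this summary. As a hedged sketch of the general idea, the classical batch-means estimator (an assumption here, not necessarily the paper's construction) estimates the long-run variance by splitting the series into non-overlapping blocks and rescaling the variance of the block means:

```python
def block_long_run_variance(x, block_len):
    """Batch-means estimate of the long-run variance.

    Splits the series into non-overlapping blocks of length block_len
    and returns block_len times the sample variance of the block means.
    Under suitable mixing, this targets the long-run variance
    sigma^2 = sum_k Cov(X_0, X_k), which the plain sample variance
    misses when observations are dependent.
    """
    n_blocks = len(x) // block_len
    if n_blocks < 2:
        raise ValueError("need at least two full blocks")
    # Means of the non-overlapping blocks.
    means = [sum(x[i * block_len:(i + 1) * block_len]) / block_len
             for i in range(n_blocks)]
    grand = sum(means) / n_blocks
    # Unbiased sample variance of the block means.
    var_of_means = sum((m - grand) ** 2 for m in means) / (n_blocks - 1)
    return block_len * var_of_means
```

For a perfectly anti-correlated series such as `[1, -1, 1, -1, ...]` with `block_len = 2`, every block mean is zero and the estimate is zero, reflecting the vanishing long-run variance.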

In the MDS case, they assume |Z_i| ≤ b and define the usual martingale sum M_n = Σ_{i=1}^n Z_i and the predictable quadratic variation ⟨M⟩_n = Σ_{i=1}^n E[Z_i² | F_{i−1}]; the bounds then replace this predictable quantity by the fully observable sample variance.
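The paper's explicit constants are not reproduced in this summary. As an illustrative analogue of a bound that uses only the sample variance, the following sketch implements the classical Maurer–Pontil empirical Bernstein upper confidence bound for i.i.d. observations in [0, b] (a stand-in technique, not the paper's inequality):

```python
import math

def empirical_bernstein_upper(x, b, delta):
    """Maurer-Pontil empirical Bernstein upper confidence bound.

    For observations in [0, b], with probability >= 1 - delta,
        E[X] <= mean + sqrt(2 * V * ln(2/delta) / n)
                     + 7 * b * ln(2/delta) / (3 * (n - 1)),
    where V is the unbiased sample variance -- a fully observable
    quantity, with no predictable variance proxy involved.
    """
    n = len(x)
    mean = sum(x) / n
    # Unbiased sample variance (the only variance term used).
    var = sum((xi - mean) ** 2 for xi in x) / (n - 1)
    log_term = math.log(2.0 / delta)
    return (mean
            + math.sqrt(2.0 * var * log_term / n)
            + 7.0 * b * log_term / (3.0 * (n - 1)))
```

When the sample variance is zero (constant data), only the O(1/n) correction term remains, which is the hallmark of Bernstein-type bounds over Hoeffding-type ones.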

