Covariance scanning for adaptively optimal change point detection in high-dimensional linear models


This paper investigates the detection and estimation of a single change in high-dimensional linear models. We derive minimax lower bounds for the detection boundary and the estimation rate, which uncover a phase transition governed by the sparsity of the covariance-weighted differential parameter. This form of “inherent sparsity” captures a delicate interplay between the covariance structure of the regressors and the change in regression coefficients in determining the detectability of a change point. Complementing the lower bounds, we introduce two covariance scanning-based methods, McScan and QcScan, which achieve minimax optimal performance (up to possible logarithmic factors) in the sparse and the dense regimes, respectively. In particular, QcScan is the first method shown to achieve consistency in the dense regime and, further, we devise a combined procedure that is adaptively minimax optimal across sparse and dense regimes without knowledge of the sparsity. Computationally, covariance scanning-based methods avoid the costly computation of Lasso-type estimators and attain worst-case computational complexity that is linear in the dimension and sample size. Additionally, we consider the post-detection estimation of the differential parameter and the refinement of the change point estimator. Simulation studies support the theoretical findings and demonstrate the computational and statistical efficiency of the proposed covariance scanning methods.


💡 Research Summary

This paper addresses the fundamental problem of detecting and localising a single change point in high‑dimensional linear regression models. The authors consider the classic “at‑most‑one‑change” (AMOC) setting where observations ((Y_t, x_t)) follow
(Y_t = x_t^\top \beta_0 + \varepsilon_t) for (t \le \theta) and
(Y_t = x_t^\top \beta_1 + \varepsilon_t) for (t > \theta),
with covariates (x_t \sim N_p(0,\Sigma)) and noise (\varepsilon_t \sim N(0,\sigma^2)). The change point is characterised by the differential parameter (\delta = \beta_1 - \beta_0), the minimal segment length (\Delta = \min(\theta, n-\theta)), and the overall signal strength (\Psi = \max\{\|\Sigma^{1/2}\beta_0\|_2,\ \|\Sigma^{1/2}\beta_1\|_2,\ \sigma\}).
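As a minimal sketch of this data-generating process (the Toeplitz covariance, coefficient values, and change location below are illustrative assumptions, not the paper's settings), the AMOC model can be simulated as follows:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, theta, sigma = 200, 50, 120, 1.0  # sample size, dimension, change point, noise level

# Assumed Toeplitz covariance Sigma_ij = rho^|i-j| (an arbitrary illustrative choice).
rho = 0.5
Sigma = rho ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
L = np.linalg.cholesky(Sigma)

beta0 = np.zeros(p)
beta0[:5] = 1.0                 # pre-change coefficients (hypothetical)
beta1 = beta0.copy()
beta1[:3] += 0.8                # post-change coefficients (hypothetical)
delta = beta1 - beta0           # differential parameter

X = rng.standard_normal((n, p)) @ L.T          # x_t ~ N_p(0, Sigma)
eps = sigma * rng.standard_normal(n)           # eps_t ~ N(0, sigma^2)
# Row t uses beta0 for t <= theta and beta1 afterwards (0-indexed here).
beta = np.where(np.arange(n)[:, None] < theta, beta0, beta1)
Y = np.einsum('ij,ij->i', X, beta) + eps       # Y_t = x_t' beta_t + eps_t
```

Here the minimal segment length is (\Delta = \min(\theta, n-\theta) = 80) and the change affects the first three coordinates of (\beta).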

1. A new notion of sparsity – “inherent sparsity”

Traditional high‑dimensional change‑point literature measures sparsity by the ℓ₀‑norm of (\delta) (i.e., the number of regression coefficients that change). This paper argues that, when the covariates are correlated, the difficulty of detection is governed not by (\|\delta\|_0) but by the number of non‑zero entries in the covariance‑weighted vector (\Sigma^{1/2}\delta). The authors denote this quantity by
(s = \|\Sigma^{1/2}\delta\|_0) and call it the inherent sparsity.
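To illustrate how the two notions of sparsity can differ (using an assumed Toeplitz covariance; none of the values below come from the paper), one can compare (\|\delta\|_0) with (\|\Sigma^{1/2}\delta\|_0) numerically:

```python
import numpy as np

p, rho = 6, 0.5
# Assumed Toeplitz covariance Sigma_ij = rho^|i-j| (illustrative only).
Sigma = rho ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))

# Symmetric square root Sigma^{1/2} via eigendecomposition.
w, V = np.linalg.eigh(Sigma)
Sigma_half = V @ np.diag(np.sqrt(w)) @ V.T

delta = np.zeros(p)
delta[0] = 1.0                                  # a 1-sparse change in beta

s_ell0 = np.count_nonzero(delta)                # ||delta||_0 = 1
s_inherent = np.count_nonzero(np.abs(Sigma_half @ delta) > 1e-10)
# With correlated covariates, Sigma^{1/2} is dense, so the covariance-weighted
# vector Sigma^{1/2} delta has more non-zero entries than delta itself.
```

In this toy example the change is 1-sparse in (\delta), yet its inherent sparsity is larger because correlation spreads the signal across coordinates; with (\Sigma = I) the two notions coincide.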

Using information‑theoretic arguments, the authors derive a minimax lower bound for the detection boundary, which exhibits a phase transition governed by the inherent sparsity (s).