Exact Multiple Change-Point Detection Via Smallest Valid Partitioning


We introduce smallest valid partitioning (SVP), a segmentation method for multiple change-point detection in time series. SVP relies on a local notion of segment validity: a candidate segment is retained only if it passes a user-chosen validity test (e.g., a single change-point test). From the collection of valid segments, we propose a coherent aggregation procedure that constructs a global segmentation which is the exact solution of an optimization problem. Our main contribution is the use of a lexicographic order for the optimization problem that prioritizes parsimony. We analyze the computational complexity of the resulting procedure, which ranges from linear to cubic time depending on the chosen cost and validity functions, the data regime, and the number of detected changes. Finally, we assess the quality of SVP through comparisons with standard optimal partitioning algorithms, showing that SVP yields competitive segmentations while explicitly enforcing segment validity. The flexibility of SVP makes it applicable to a broad class of problems; as an illustration, we demonstrate robust change-point detection by encoding robustness in the validity criterion.


💡 Research Summary

The paper introduces a novel framework for multiple change‑point detection called Smallest Valid Partitioning (SVP). Unlike traditional optimal partitioning (OP) or segment neighborhood (SN) methods, which rely solely on a global cost plus a penalty term, SVP incorporates a local "segment validity" constraint. A candidate segment is deemed valid only if it passes a user‑specified single‑change test (e.g., CUSUM, FOCuS, Wilcoxon). This test yields a statistic f(y_{a..b}), which must satisfy f(y_{a..b}) ≤ γ, where the threshold γ controls the strictness of the validity condition.
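To make the validity condition concrete, here is a minimal sketch of what such a check might look like, using a CUSUM-type statistic for a single change in mean. The function names and the threshold `gamma` are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def cusum_stat(y):
    """Max standardized CUSUM statistic for a single change in mean
    on segment y (partial sums of the centered data, scaled)."""
    n = len(y)
    if n < 2:
        return 0.0
    s = np.cumsum(y - y.mean())          # centered partial sums
    k = np.arange(1, n)                  # interior split points
    scale = np.sqrt(k * (n - k) / n)     # standardization at each split
    return float(np.max(np.abs(s[:-1]) / scale))

def is_valid(y, gamma):
    """Segment is valid (no detectable change) if the statistic stays below gamma."""
    return cusum_stat(y) <= gamma
```

A segment drawn from a single regime yields a small statistic and passes, while a segment straddling a mean shift yields a large statistic and is rejected.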

SVP formulates the segmentation problem as a bi‑objective minimization over the pair (K, Q), where K is the number of segments and Q is the total segment cost Σ C(y_{τ_k..τ_{k+1}}). The optimization uses a lexicographic order: first minimise K (parsimony), then, among solutions with the same K, minimise Q (goodness of fit). The resulting objective can be written as

R_n = min⪯ { (K, Σ_{k=0}^{K-1} C(y_{τ_k..τ_{k+1}})) | ∀k, f(y_{τ_k..τ_{k+1}}) ≤ γ }.
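This lexicographic preference is exactly how tuple comparison behaves in many languages; a toy Python illustration (the (K, Q) values are made up for the example):

```python
# Candidate segmentations summarized as (K, Q):
# K = number of segments, Q = total segment cost.
candidates = [(3, 10.0), (2, 50.0), (2, 45.0), (4, 1.0)]

# Python compares tuples lexicographically, so min() first picks the
# smallest K (parsimony) and only then the smallest Q (goodness of fit).
best = min(candidates)  # (2, 45.0)
```

Note that (4, 1.0) loses despite its much lower cost: parsimony strictly dominates fit under this order.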

Dynamic programming (DP) solves this efficiently. The DP recursion is

R_t = min⪯_{0≤s<t} { R_s + (1, C(y_{s..t})) | f(y_{s..t}) ≤ γ },  R_0 = (0, 0),

where min⪯ denotes the minimum with respect to the lexicographic order: candidates are compared first on segment count K, and total cost breaks ties among candidates with the same count. To prune the search space, the authors define γ‑stable and γ⁻¹‑stable validity functions. A γ‑stable function guarantees that if a segment is invalid, any longer extension remains invalid; γ⁻¹‑stability ensures that if a segment ending at t is invalid, any earlier start also yields an invalid segment. These properties allow early discarding of candidate start points s, dramatically reducing the number of DP updates.

The paper proves that if the GLR‑based validity test f_OP(y_{a..b}) = max_{a<u<b}{C(y_{a..b}) –

