Quickest Change Detection of a Markov Process Across a Sensor Array
Recent attention in quickest change detection in the multi-sensor setting has been on the case where the densities of the observations change at the same instant at all the sensors due to the disruption. In this work, a more general scenario is considered where the change propagates across the sensors, and its propagation can be modeled as a Markov process. A centralized, Bayesian version of this problem, with a fusion center that has perfect information about the observations and a priori knowledge of the statistics of the change process, is considered. The problem of minimizing the average detection delay subject to false alarm constraints is formulated as a partially observable Markov decision process (POMDP). Insights into the structure of the optimal stopping rule are presented. In the limiting case of rare disruptions, we show that the structure of the optimal test reduces to thresholding the a posteriori probability of the hypothesis that no change has happened. We establish the asymptotic optimality (in the vanishing false alarm probability regime) of this threshold test under a certain condition on the Kullback-Leibler (K-L) divergence between the post- and the pre-change densities. In the special case of near-instantaneous change propagation across the sensors, this condition reduces to the mild condition that the K-L divergence be positive. Numerical studies show that this low complexity threshold test results in a substantial improvement in performance over naive tests such as a single-sensor test or a test that wrongly assumes that the change propagates instantaneously.
💡 Research Summary
The paper tackles a more realistic version of the quickest change‑detection problem in a multi‑sensor network, where a disruption does not appear simultaneously at all sensors but propagates across the array according to a Markov process. A centralized Bayesian framework is assumed: a fusion center receives all raw observations, knows the pre‑ and post‑change probability density functions (f₀, f₁) for each sensor, and possesses prior knowledge of the transition matrix governing the change’s spread.
The authors formulate the detection task as a constrained optimization: minimize the average detection delay (ADD) subject to a false‑alarm probability (PFA) not exceeding a prescribed level α. By modeling the hidden state (which sensors have already been affected) as a partially observable Markov decision process (POMDP), they maintain a belief state whose key component, πₖ, is the posterior probability that no sensor has yet experienced the change given the observations up to time k. The optimal stopping rule for the full POMDP is generally intractable, but the paper shows that, in the rare‑disruption regime (i.e., when the prior probability of a change occurring at any given time is very small), the optimal policy collapses to a simple threshold test on a single scalar: stop as soon as the posterior probability that the system is still in the pre‑change state falls below a constant γ; otherwise continue sampling.
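To make the stopping rule concrete, the recursion below sketches a Shiryaev‑style version for a single collapsed change point. This is a simplification of the paper's multi‑sensor belief update, not its exact recursion; the geometric prior parameter `rho`, the Gaussian densities, and all numerical values are hypothetical:

```python
import math

def gaussian_pdf(x, mu, sigma=1.0):
    """Density of N(mu, sigma^2) at x."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

def threshold_test(obs, f0, f1, rho, gamma):
    """Recursively update p_k = P(change occurred by time k | observations)
    and stop the first time the 'no change yet' probability 1 - p_k drops
    below gamma.  rho is an assumed geometric prior on the change point."""
    p = 0.0  # prior probability that the change has already occurred
    for k, x in enumerate(obs, start=1):
        p_pred = p + (1.0 - p) * rho          # change may occur this slot
        num = p_pred * f1(x)                  # Bayes update with the new
        den = num + (1.0 - p_pred) * f0(x)    # observation's likelihoods
        p = num / den
        if 1.0 - p < gamma:                   # "no change" posterior is low
            return k                          # declare the change at time k
    return None                               # never stopped

# Hypothetical densities: pre-change N(0,1), post-change N(2,1).
stop = threshold_test([2.0] * 50,
                      lambda x: gaussian_pdf(x, 0.0),
                      lambda x: gaussian_pdf(x, 2.0),
                      rho=0.1, gamma=0.1)
```

With observations strongly favoring f₁, the "no change" posterior decays geometrically, so the test fires after only a few samples.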
A key technical contribution is the proof of asymptotic optimality of this threshold rule as α → 0. The proof hinges on a condition involving the Kullback‑Leibler (KL) divergence D(f₁‖f₀) between the post‑change and pre‑change densities and the spectral properties of the transition matrix governing the propagation. Under this condition, the detection delay of the threshold test scales as |log α| / D_eff, where D_eff is an effective KL divergence that accounts for the propagation dynamics. In the special case where the change propagates almost instantaneously across the sensors, D_eff reduces to the ordinary KL divergence, and the condition simplifies to the mild requirement D(f₁‖f₀) > 0, which any pair of non‑identical densities satisfies.
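A small numeric sketch makes the scaling tangible. For equal‑variance Gaussians, D(f₁‖f₀) = (μ₁ − μ₀)² / (2σ²), and the first‑order delay approximation is |log α| / D_eff; the particular means, variance, and α below are illustrative choices, not values from the paper:

```python
import math

def gaussian_kl(mu1, mu0, sigma):
    """D(f1 || f0) in nats for N(mu1, sigma^2) vs N(mu0, sigma^2)."""
    return (mu1 - mu0) ** 2 / (2.0 * sigma ** 2)

def asymptotic_delay(alpha, d_eff):
    """First-order delay approximation |log alpha| / D_eff as alpha -> 0."""
    return abs(math.log(alpha)) / d_eff

d = gaussian_kl(1.0, 0.0, 1.0)       # 0.5 nats per sample
delay = asymptotic_delay(1e-3, d)    # about 13.8 samples
```

Halving α multiplies the delay only by an additive log 2 / D_eff term, which is why the threshold test remains practical even at very small false‑alarm levels.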
The authors validate their theory with Monte‑Carlo simulations. They consider a five‑sensor array with various propagation probabilities (p = 0.2, 0.5, 0.8) and different pairs of (f₀, f₁). The proposed threshold test is compared against (i) a naïve single‑sensor detector that ignores the rest of the array, and (ii) a centralized detector that wrongly assumes the change hits all sensors simultaneously. Results show that the new test achieves a 30–50% reduction in average detection delay at the same false‑alarm level, with the advantage growing as the change spreads more slowly across the array. Moreover, the computational complexity of the test is linear in the number of sensors, making it suitable for real‑time implementation.
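The slow‑propagation regime in which the proposed test shines can be mimicked with a toy change‑process simulator. The model below (sensor 0 disrupted first, the change then reaching one new sensor per slot with probability p) is one plausible Markov propagation model chosen for illustration, not necessarily the paper's:

```python
import random

def propagate_change(num_sensors, change_time, p, horizon, rng=None):
    """Sensor 0 is disrupted at change_time; thereafter, in each slot,
    the change reaches the next unaffected sensor with probability p.
    Returns the time each sensor becomes affected (None if never)."""
    rng = rng or random.Random(0)
    t_affect = [None] * num_sensors
    t_affect[0] = change_time
    frontier = 1  # index of the next sensor the change can reach
    for t in range(change_time + 1, horizon):
        if frontier >= num_sensors:
            break
        if rng.random() < p:
            t_affect[frontier] = t
            frontier += 1
    return t_affect

# With p = 1 the change marches deterministically down the array:
times = propagate_change(5, change_time=3, p=1.0, horizon=100)
```

Feeding observations generated this way (pre‑change samples from f₀, each sensor switching to f₁ at its affected time) to the competing detectors reproduces the qualitative effect: the slower the spread, the larger the penalty for assuming an instantaneous change.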
In conclusion, the paper extends quickest change detection theory to accommodate Markovian change propagation, provides a rigorous POMDP‑based analysis, and delivers a low‑complexity, asymptotically optimal detection rule. The work opens avenues for more sophisticated network‑wide monitoring applications such as intrusion detection, environmental surveillance, and fault diagnosis in industrial processes, where disruptions often travel through the system rather than appearing everywhere at once.