The Kalman Like Particle Filter: Optimal Estimation With Quantized Innovations/Measurements
We study the problem of optimal estimation and control of linear systems using quantized measurements, with a focus on applications over sensor networks. We show that the state conditioned on a causal quantization of the measurements can be expressed as the sum of a Gaussian random vector and a certain truncated Gaussian vector. This structure bears a close resemblance to the full-information Kalman filter and so allows us to effectively combine the Kalman structure with a particle filter to recursively compute the state estimate. We call the resulting filter the Kalman-like particle filter (KLPF) and observe that it delivers close-to-optimal performance using far fewer particles than a particle filter applied directly to the original problem. We show that the conditional state density follows a so-called generalized closed skew-normal (GCSN) distribution. We further show that for such systems the classical separation property between control and estimation holds and that the certainty-equivalent control law is LQG optimal.
💡 Research Summary
The paper tackles the challenging problem of optimal state estimation and control for linear dynamical systems when the available measurements are quantized—a situation common in sensor‑network and IoT applications where communication bandwidth is limited. The authors first formalize “causal quantization,” meaning that at any time t the estimator has access to all quantized measurements up to that instant. They then prove a fundamental structural result: the conditional state distribution given the quantized measurement history can be decomposed into the sum of two independent random vectors. The first component is a Gaussian vector whose mean and covariance follow exactly the standard Kalman‑filter prediction‑update equations. The second component is a truncated Gaussian whose truncation limits are determined by the quantization intervals that the measurements fall into. This truncated Gaussian belongs to a broader family called the Generalized Closed Skew‑Normal (GCSN) distribution, which generalizes normal, skew‑normal, and many mixture models while still admitting closed‑form expressions for its moments and density.
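To make the truncated-Gaussian component concrete, the sketch below (our illustration, not code from the paper) uses SciPy's `truncnorm` to show that a Gaussian restricted to a quantization interval still has closed-form moments, as the summary claims for the GCSN family:

```python
import numpy as np
from scipy.stats import truncnorm

# A standard Gaussian truncated to [0, inf) -- e.g. the event that a 1-bit
# quantizer reported "y >= 0". truncnorm takes its limits in standard-
# deviation units relative to loc/scale.
tg = truncnorm(0.0, np.inf, loc=0.0, scale=1.0)

# Closed-form moments: for N(0,1) truncated to [0, inf), the mean is
# phi(0) / (1 - Phi(0)) = sqrt(2/pi) and the variance is 1 - 2/pi.
print(tg.mean())  # ~0.7979 = sqrt(2/pi)
print(tg.var())   # ~0.3634 = 1 - 2/pi

# Monte Carlo sanity check against the closed-form mean.
rng = np.random.default_rng(0)
samples = tg.rvs(size=100_000, random_state=rng)
print(abs(samples.mean() - np.sqrt(2 / np.pi)) < 0.01)
```

Having these moments in closed form is what lets the KLPF track the truncated part analytically instead of brute-force sampling the whole posterior.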
Exploiting this decomposition, the authors propose the Kalman‑like Particle Filter (KLPF). The algorithm proceeds as follows: (1) a conventional Kalman prediction step provides a Gaussian prior (μ⁻, Σ⁻); (2) the quantization interval of the new measurement is identified, and the parameters of the associated truncated Gaussian are updated using GCSN formulas; (3) a modest number of particles are drawn only from the truncated‑Gaussian part, each weighted by the product of the Kalman prior density and the truncated‑Gaussian likelihood; (4) the final state estimate is the weighted sum of the Kalman mean and the particle mean, and the covariance is the sum of the two parts’ covariances. Because the bulk of the distribution is already captured by the Kalman Gaussian, the particle component needs far fewer samples than a conventional particle filter that would have to approximate the whole posterior. Numerical experiments confirm that KLPF achieves near‑optimal mean‑square error with an order‑of‑magnitude reduction in particle count, across 1‑bit, 2‑bit, and higher‑resolution quantizers, different system dimensions, and even under non‑Gaussian process noise.
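The weighting idea in step (3) can be illustrated with a deliberately simplified filter: a plain bootstrap particle filter for a scalar system with a 1-bit quantizer, where each particle is weighted by the probability that the unseen measurement fell in the reported quantization interval. This is not the paper's KLPF (which samples only the truncated-Gaussian component); it is a minimal sketch of the quantized-measurement likelihood, with all parameters (`a`, `sigma_w`, `sigma_v`) chosen arbitrarily:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Scalar linear system x_{k+1} = a x_k + w_k, y_k = x_k + v_k; the filter
# only ever sees the 1-bit quantization q_k = sign(y_k).
a, sigma_w, sigma_v = 0.95, 0.5, 0.3
N, T = 500, 50

x = 0.0
particles = rng.normal(0.0, 1.0, size=N)
estimates = []

for _ in range(T):
    # Simulate the true system and its quantized measurement.
    x = a * x + sigma_w * rng.normal()
    q = 1 if x + sigma_v * rng.normal() >= 0 else -1

    # Bootstrap proposal: propagate particles through the dynamics.
    particles = a * particles + sigma_w * rng.normal(size=N)

    # Quantized-measurement likelihood: the Gaussian mass of the
    # measurement noise on the side of zero the quantizer reported.
    if q > 0:
        w = norm.sf(-particles / sigma_v)   # P(y_k >= 0 | x_k)
    else:
        w = norm.cdf(-particles / sigma_v)  # P(y_k <  0 | x_k)
    w = w / w.sum()

    # Posterior-mean estimate, then resample.
    estimates.append(float(w @ particles))
    particles = rng.choice(particles, size=N, p=w)

print(estimates[-1])
```

A filter like this must cover the entire posterior with particles; the KLPF's advantage is that the Gaussian bulk is handled by the Kalman recursion, so the particles only have to represent the much smaller truncated component.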
On the control side, the paper shows that despite the posterior being GCSN rather than Gaussian, the certainty-equivalence principle still holds. The optimal LQG control law remains a linear feedback of the conditional state mean, uₖ = –L · E[xₖ | quantized measurements up to time k], with the same gain L as in the fully observed LQR problem. Estimation and control thus separate cleanly: the KLPF supplies the conditional mean, and the standard Riccati-based gain closes the loop.
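Certainty equivalence means the feedback gain can be computed offline from the deterministic LQR problem alone, independent of the quantizer. As an illustration (our sketch with an arbitrary scalar example, not code from the paper), the steady-state gain from SciPy's discrete algebraic Riccati solver:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Arbitrary scalar LQR example: x_{k+1} = A x_k + B u_k,
# cost sum of x'Qx + u'Ru.
A = np.array([[1.0]])
B = np.array([[1.0]])
Q = np.array([[1.0]])
R = np.array([[1.0]])

# Steady-state Riccati solution and the LQR gain
# L = (R + B'PB)^{-1} B'PA. Certainty equivalence says the optimal input
# is u_k = -L * E[x_k | quantized measurements], with L unchanged by the
# quantization.
P = solve_discrete_are(A, B, Q, R)
L = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

print(P)  # [[1.618...]], the golden ratio for this scalar example
print(L)  # [[0.618...]]
```

The estimator (here, the KLPF's conditional mean) and the gain L are designed entirely separately, which is exactly the separation property the paper establishes.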