Safety of particle filters: Some results on the time evolution of particle filter estimates
Particle filters (PFs) form a class of Monte Carlo algorithms that propagate over time a set of $N\geq 1$ particles which can be used to estimate, in an online fashion, the sequence of filtering distributions $(\hat{\eta}_t)_{t\geq 1}$ defined by a state-space model. Despite the popularity of PFs, the study of the time evolution of their estimates has received barely any attention in the literature. Denoting by $(\hat{\eta}_t^N)_{t\geq 1}$ the PF estimate of $(\hat{\eta}_t)_{t\geq 1}$ and letting $\kappa\in (0,1/2)$, in this work we first show that for any number of particles $N$ it holds that, with probability one, we have $\|\hat{\eta}_t^N-\hat{\eta}_t\|\geq \kappa$ for infinitely many time instants $t\geq 1$, with $\|\cdot\|$ the Kolmogorov distance between probability distributions. Considering a simple filtering problem, we then provide reassuring results concerning the ability of PFs to estimate jointly a finite set $\{\hat{\eta}_t\}_{t=1}^T$ of filtering distributions by studying the probability $\mathbb{P}\bigl(\sup_{t\in\{1,\dots,T\}}\|\hat{\eta}_t^{N}-\hat{\eta}_t\|\geq \kappa\bigr)$. Finally, on the same toy filtering problem, we prove that sequential quasi-Monte Carlo, a randomized quasi-Monte Carlo version of PF algorithms, offers greater safety guarantees than PFs in the sense that, for this algorithm, it holds that $\lim_{N\rightarrow\infty}\sup_{t\geq 1}\|\hat{\eta}_t^N-\hat{\eta}_t\|=0$ with probability one.
💡 Research Summary
The paper investigates the long‑term reliability of particle filters (PFs) in sequential state‑space estimation, focusing on how the approximation error evolves over time. Using the simplest possible setting—a one‑dimensional linear Gaussian state‑space model with all observations fixed at zero—the authors first prove that the true filtering distributions converge exponentially fast to a stationary Gaussian law. They then analyze the bootstrap PF with multinomial resampling, where at each time step independent uniform random numbers drive resampling and mutation. By measuring error with the Kolmogorov distance, they establish a striking “almost‑sure” negative result: for any threshold κ∈(0,½), the event ‖η̂ₜᴺ−η̂ₜ‖≥κ occurs infinitely often with probability one, regardless of the number of particles N. The proof relies on a Borel‑Cantelli argument showing that the empirical particle measure will assign zero mass to a fixed interval infinitely many times, which forces the distance to exceed κ. This demonstrates that PFs inevitably suffer large, unavoidable errors in the long run, contradicting the common belief that PF accuracy does not deteriorate with time.
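The setup above is easy to reproduce numerically. Below is a minimal sketch (not the paper's code) of the toy model: a one-dimensional linear Gaussian state-space model with all observations fixed at zero, a bootstrap PF with multinomial resampling, and the Kolmogorov distance between the particle approximation and the exact filtering distribution obtained from the Kalman recursion. The specific parameter values (ρ = 0.9, unit noise variances, N = 2048, T = 50) are illustrative choices, not taken from the paper.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
rho, sx, sy = 0.9, 1.0, 1.0     # illustrative AR(1) model parameters
N, T = 2048, 50                 # particles, time horizon

# Exact filter via the Kalman recursion (all observations y_t = 0);
# start both filters from the stationary law of the state process.
m, P = 0.0, sx**2 / (1.0 - rho**2)
x = rng.normal(0.0, np.sqrt(P), N)

def kolmogorov(particles, mean, var):
    """Sup-distance between the empirical CDF of the (unweighted)
    particles and the exact Gaussian filtering CDF."""
    z = np.sort(particles)
    F = norm.cdf(z, mean, np.sqrt(var))
    i = np.arange(1, len(z) + 1)
    return max(np.max(np.abs(i / len(z) - F)),
               np.max(np.abs((i - 1) / len(z) - F)))

for t in range(T):
    # Kalman predict/update with observation y_t = 0
    m_p, P_p = rho * m, rho**2 * P + sx**2
    K = P_p / (P_p + sy**2)
    m, P = m_p + K * (0.0 - m_p), (1.0 - K) * P_p
    # Bootstrap PF: mutate, reweight, multinomial resample
    x = rho * x + sx * rng.normal(size=N)
    w = norm.pdf(0.0, x, sy)
    w /= w.sum()
    x = rng.choice(x, size=N, p=w)

d = kolmogorov(x, m, P)
print(f"Kolmogorov distance at t={T}: {d:.4f}")
```

For a fixed N the distance at any single time step is typically small, which is exactly why the almost-sure result is striking: over an infinite horizon, excursions above any κ < 1/2 still occur infinitely often.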
Recognizing that many applications only require accurate estimates over a finite horizon, the authors derive a quantitative bound on the required particle count. For a given horizon T, error tolerance κ, and confidence level q, they prove that any N ≥ C·v_κ·log(T/q) guarantees P(sup_{t≤T}‖η̂ₜᴺ−η̂ₜ‖≥κ) ≤ q, where C is a universal constant and v_κ depends only on κ. This bound improves upon earlier results (e.g., Del Moral 2013) by providing a sharper dependence on T and by being applicable to the toy model considered. Conversely, they show that if N is fixed, the probability of exceeding κ converges to one as T→∞, confirming the impossibility of uniform long‑term guarantees for standard PFs.
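The practical upshot of the bound N ≥ C·v_κ·log(T/q) is that doubling the horizon (or tightening the confidence level) costs only an additive, not multiplicative, number of particles. The sketch below illustrates this scaling with placeholder values for C and v_κ; the paper's actual constants are not reproduced here.

```python
import math

def particles_needed(T, q, v_kappa=100.0, C=1.0):
    """Illustrative finite-horizon bound N >= C * v_kappa * log(T / q).
    C and v_kappa are hypothetical placeholders, not the paper's constants."""
    return math.ceil(C * v_kappa * math.log(T / q))

# Growing the horizon a thousandfold adds only ~ v_kappa * log(1000) particles.
n_short = particles_needed(T=10**3, q=0.01)
n_long = particles_needed(T=10**6, q=0.01)
print(n_short, n_long, n_long - n_short)
```

This logarithmic growth in T is precisely what makes finite-horizon guarantees cheap, while the converse result (fixed N, T → ∞) shows no fixed budget survives an unbounded horizon.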
To overcome this limitation, the paper introduces sequential quasi‑Monte Carlo (SQMC), which replaces the independent uniform draws used in resampling with points from a scrambled (t,s)‑sequence. Such sequences have low star‑discrepancy, of order O(N⁻¹ (log N)ˢ) almost surely, thereby reducing the stochastic variability introduced at each step. The authors prove that, for the same linear Gaussian model, SQMC satisfies lim_{N→∞} sup_{t≥1}‖η̃ₜᴺ−η̂ₜ‖ = 0 with probability one. In other words, SQMC provides an almost‑sure uniform convergence guarantee across all time steps, a property that standard PFs lack.
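The low-discrepancy property driving the SQMC result can be inspected directly with SciPy's quasi-Monte Carlo tools. The sketch below (an illustration, not the paper's construction) compares the L2-star discrepancy of a scrambled Sobol' sequence, which is a scrambled (t,s)-sequence in base 2, against the same number of iid uniform draws.

```python
import numpy as np
from scipy.stats import qmc

m = 10
n = 2**m                                     # Sobol' points come in powers of 2

# Scrambled Sobol' sequence in [0, 1): a randomized (t,s)-sequence.
sobol = qmc.Sobol(d=1, scramble=True, seed=1)
pts_rqmc = sobol.random_base2(m)

# Plain iid uniform draws, as used by a standard PF.
rng = np.random.default_rng(1)
pts_iid = rng.random((n, 1))

d_rqmc = qmc.discrepancy(pts_rqmc, method="L2-star")
d_iid = qmc.discrepancy(pts_iid, method="L2-star")
print(f"scrambled Sobol': {d_rqmc:.2e}   iid uniform: {d_iid:.2e}")
```

The scrambled points fill the unit interval far more evenly than iid uniforms at the same N, which is the mechanism by which SQMC suppresses the per-step resampling noise that accumulates in a standard PF.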
The paper’s contributions are threefold: (1) a rigorous demonstration that PFs cannot guarantee uniformly small errors over an infinite horizon; (2) a finite‑horizon error bound that explicitly relates particle number, horizon length, and confidence; and (3) a proof that SQMC restores uniform almost‑sure convergence, offering a safer alternative for safety‑critical applications such as autonomous driving, robotics, and ballistic tracking. The authors conclude by suggesting extensions to higher‑dimensional, non‑linear, and non‑Gaussian models, and by emphasizing the practical need to adopt de‑randomized or low‑discrepancy techniques when long‑term reliability is essential.