Robust, partially alive particle Metropolis-Hastings via the Frankenfilter
When a hidden Markov model permits the conditional likelihood of an observation given the hidden process to be zero, all particle simulations from one observation time to the next could produce zeros. If so, the filtering distribution cannot be estimated and the estimated parameter likelihood is zero. The alive particle filter addresses this by simulating a random number of particles for each inter-observation interval, stopping after a target number of non-zero conditional likelihoods. For outlying observations or poor parameter values, a non-zero result can be extremely unlikely, and computational costs prohibitive. We introduce the Frankenfilter, a principled, partially alive particle filter that targets a user-defined amount of success whilst fixing lower and upper bounds on the number of simulations. The Frankenfilter produces unbiased estimators of the likelihood, suitable for pseudo-marginal Metropolis–Hastings (PMMH). We demonstrate that PMMH with the Frankenfilter is more robust to outliers and mis-specified initial parameter values than PMMH using standard particle filters, and is typically at least 2-3 times more efficient. We also provide advice for choosing the amount of success. In the case of n exact observations, this is particularly simple: target n successes.
💡 Research Summary
The paper addresses a fundamental failure mode of particle filters when applied to hidden Markov models (HMMs) whose observation likelihoods can be exactly zero for certain state–observation mismatches. In such cases, a conventional particle filter that propagates a fixed number of particles from one observation time to the next may produce only zero‑weight particles, causing the filter to “die out” and yielding a zero estimate of the data likelihood. This zero estimate destroys the pseudo‑marginal Metropolis–Hastings (PMMH) algorithm, which relies on an unbiased likelihood estimator.
Existing “alive particle filters” mitigate the problem by repeatedly simulating particles until a pre‑specified number of successes (non‑zero weights) is obtained. However, when the success probability is extremely low—e.g., due to outlying observations or poorly chosen parameters—the required number of simulations can become astronomically large. A common practical fix is to impose a hard upper bound on the number of simulations; if the bound is reached, the algorithm returns a likelihood of zero. This truncation introduces bias, making the estimator unsuitable for PMMH.
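The alive filter's "simulate until enough successes" scheme and the bias introduced by a hard cap can be sketched as follows. This is an illustrative toy, not code from the paper: `alive_interval` and its interface are hypothetical, and the \((n-1)/(N-1)\) formula is the classical unbiased estimator of a success probability under inverse binomial (negative-binomial) sampling, which requires `n_target >= 2`.

```python
def alive_interval(simulate_is_success, n_target, m_cap=None):
    """One inter-observation update of a (hypothetical) alive-filter sketch.

    simulate_is_success: draws one particle transition and reports whether its
    conditional likelihood is non-zero. Simulation continues until n_target
    successes are seen. With N total simulations, (n_target - 1) / (N - 1) is
    the classical unbiased estimate of the per-simulation success probability
    (n_target must be at least 2). Imposing a hard cap m_cap and returning
    zero when it is hit -- the common practical fix described above -- biases
    the estimator downward.
    """
    successes, trials = 0, 0
    while successes < n_target:
        trials += 1
        if m_cap is not None and trials > m_cap:
            return 0.0  # truncation: the resulting estimator is biased
        if simulate_is_success():
            successes += 1
    return (n_target - 1) / (trials - 1)
```

With a very small success probability and a finite `m_cap`, the zero branch fires often, which is precisely the bias that rules this truncated scheme out for PMMH.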
The authors propose the “Frankenfilter,” a partially alive particle filter that retains the robustness of the alive filter while guaranteeing an unbiased likelihood estimator and bounding computational effort. The key innovations are:
- Dual bounds on simulation effort – a minimum \(m^-\) and a maximum \(m^+\) on the number of particle simulations per observation interval.
- Generalised notion of success – instead of binary success/failure, each simulation carries a non-negative weight \(w_j\) (e.g., an importance-weighted likelihood) and an associated "success amount" \(s_j\) (which may equal \(w_j\) or be another function of the conditional likelihood).
- Success-threshold stopping rule – simulations continue until the cumulative success amount reaches a user-defined threshold \(s\) or the maximum number of simulations \(m^+\) is hit. If the threshold is met, the estimator uses the average of the collected weights; if the bound is hit without sufficient success, the estimator returns zero.
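The three ingredients above combine into a single stopping rule per observation interval, sketched below. The function name and interface are illustrative; the success amount is taken equal to the weight (\(s_j = w_j\)) for simplicity, and the plain average used here omits the corrections the paper employs to guarantee unbiasedness.

```python
def frankenfilter_interval(simulate_weight, s_target, m_min, m_max):
    """One inter-observation update of a (hypothetical) Frankenfilter sketch.

    simulate_weight: draws one particle transition and returns its
    non-negative weight w_j. Simulation stops once at least m_min draws have
    been made AND the cumulative success amount reaches s_target, or once
    m_max draws have been made, whichever comes first.
    """
    weights = []
    total_success = 0.0
    while len(weights) < m_max:
        w = simulate_weight()
        weights.append(w)
        total_success += w  # success amount s_j = w_j in this sketch
        if len(weights) >= m_min and total_success >= s_target:
            return sum(weights) / len(weights)  # threshold met
    return 0.0  # m_max reached without enough success
```

The lower bound \(m^-\) caps the variance of the estimate, while the upper bound \(m^+\) caps the computational cost; the zero returned on failure is what the paper's construction makes compatible with unbiasedness.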
The paper first presents the simplest version (Algorithm 2) for completely observed discrete‑state jump processes, where success is defined by exact state matching. It then extends to arbitrary non‑binary weights and success measures (Algorithm 3), proving unbiasedness via exchangeability arguments and the tower property of expectations. Finally, Algorithm 5 handles partial or noisy observations by sampling entire state trajectories between observation times, employing ancestor‑sampling for resampling, and applying the same bounded‑success framework.
A central theoretical result is that the Frankenfilter's likelihood estimator \(\hat{p}(y_{1:T})\) is unbiased: \(\mathbb{E}[\hat{p}(y_{1:T})] = p(y_{1:T})\). This is exactly the property pseudo-marginal Metropolis-Hastings requires, so the Frankenfilter can be used inside PMMH without altering the target posterior.
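An unbiased likelihood estimator is all that pseudo-marginal Metropolis-Hastings needs, and the resulting sampler is short. The sketch below assumes a symmetric proposal and a flat prior for brevity; `lik_hat` stands in for a full Frankenfilter pass over all observation intervals, and a zero estimate (the filter dying out, or the Frankenfilter hitting \(m^+\)) simply leads to rejection.

```python
import random

def pmmh(lik_hat, theta0, propose, n_iters, rng=random):
    """Pseudo-marginal Metropolis-Hastings sketch.

    lik_hat(theta): an unbiased *estimate* of the likelihood p(y_{1:T} | theta),
    e.g. from a full Frankenfilter pass. Symmetric proposal and flat prior
    are assumed, so the acceptance ratio reduces to p_prop / p_hat.
    """
    theta, p_hat = theta0, lik_hat(theta0)
    chain = [theta]
    for _ in range(n_iters):
        theta_prop = propose(theta)
        p_prop = lik_hat(theta_prop)
        # A zero estimate for the current state would make the ratio
        # undefined; short-circuit by accepting any proposal in that case.
        if p_hat == 0.0 or rng.random() < p_prop / p_hat:
            theta, p_hat = theta_prop, p_prop
        chain.append(theta)
    return chain
```

Because the estimator is plugged in place of the exact likelihood, the chain still targets the exact posterior, which is why the truncation-induced bias of the capped alive filter is disqualifying while the Frankenfilter's bounded-effort zero is not.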