On sequential Monte Carlo, partial rejection control and approximate Bayesian computation


We present a sequential Monte Carlo sampler variant of the partial rejection control algorithm, and show that this variant can be considered as a sequential Monte Carlo sampler with a modified mutation kernel. We prove that the new sampler can reduce the variance of the incremental importance weights when compared with standard sequential Monte Carlo samplers. We provide a study of theoretical properties of the new algorithm, and make connections with some existing algorithms. Finally, the sampler is adapted for application under the challenging "likelihood-free" approximate Bayesian computation modelling framework, where we demonstrate superior performance over existing likelihood-free samplers.


💡 Research Summary

The paper introduces a novel variant of a sequential Monte Carlo (SMC) sampler that incorporates the Partial Rejection Control (PRC) mechanism. The authors start by observing that standard SMC samplers generate proposals through a mutation kernel, assign importance weights, and periodically resample. In many realistic problems the mutation kernel can produce low‑quality proposals that receive negligible weights, inflating the variance of the incremental importance weights and reducing overall efficiency. PRC, originally proposed as a stand‑alone rejection‑based variance‑reduction technique, discards proposals that fail to meet a pre‑specified quality criterion before they are assigned a weight. By embedding PRC directly into the mutation step of SMC, the authors obtain a “conditional” mutation kernel that only proposes in regions where the acceptance probability exceeds a dynamically adjusted threshold.

The theoretical contribution consists of two main results. First, the authors prove that the PRC‑augmented mutation kernel preserves unbiasedness with respect to the target sequence of distributions. The proof hinges on a correction factor α_t(x_t) that accounts for the probability of rejection at each step; when this factor is included in the weight update, the expectation of the weighted particles remains exactly the target distribution. Second, they demonstrate that the variance of the incremental importance weights under the PRC‑SMC scheme is never larger—and is typically strictly smaller—than that of a standard SMC sampler. This is shown by comparing the second moment of the weight distribution before and after applying the PRC filter, using ℒ₂‑norm arguments and detailed balance conditions. The variance reduction directly translates into higher effective sample sizes (ESS) and less frequent resampling, which in turn improves convergence speed.
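The effective sample size mentioned above is the standard diagnostic ESS = (Σᵢ wᵢ)² / Σᵢ wᵢ², which ranges from 1 (one particle carries all the weight) to N (uniform weights). A minimal sketch of how it is typically computed from log-weights (function names here are illustrative, not from the paper):

```python
import numpy as np

def effective_sample_size(log_weights):
    """ESS = (sum w)^2 / sum(w^2), computed stably from log-weights."""
    lw = log_weights - np.max(log_weights)  # shift to avoid overflow
    w = np.exp(lw)
    return w.sum() ** 2 / np.sum(w ** 2)

# Nearly uniform weights give ESS close to N; a single dominant
# particle collapses ESS toward 1.
uniform = np.zeros(1000)                   # equal log-weights
skewed = np.array([0.0] * 999 + [20.0])    # one dominant particle
print(effective_sample_size(uniform))      # -> 1000.0
print(effective_sample_size(skewed))       # close to 1
```

Lower weight variance therefore raises the ESS directly, which is why the variance-reduction result translates into less frequent resampling.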

Algorithmically, PRC‑SMC proceeds as follows. At iteration t, each particle x_{t−1}^i is mutated via a proposal kernel K_t to produce a candidate x_t^{*}. A quality metric q(x_t^{*})—for example a distance between simulated and observed data, or an approximate likelihood—is evaluated. If q(x_t^{*}) ≤ ε_t (the current PRC threshold), the candidate is accepted; otherwise it is rejected and a new candidate is drawn from K_t. The acceptance probability α_t is estimated on the fly from the empirical distribution of q, and the incremental importance weight is updated with the correction factor α_t included, so that the weighted particle approximation of the target π_t remains unbiased.
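The mutate-reject-reweight step above can be sketched as follows. This is a toy illustration only: the Gaussian random-walk proposal, the distance-from-origin quality metric, and names like `prc_mutate` are assumptions for the sketch, and the empirical α_t estimate (one over the number of attempts) is a crude stand-in for the paper's estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

def prc_mutate(x_prev, epsilon_t, propose, quality, max_tries=1000):
    """PRC-style mutation: redraw candidates until q(x*) <= epsilon_t,
    counting attempts so the acceptance probability alpha_t can be
    estimated empirically.  Illustrative sketch, not the paper's code."""
    for tries in range(1, max_tries + 1):
        x_star = propose(x_prev)
        if quality(x_star) <= epsilon_t:
            return x_star, 1.0 / tries       # crude alpha_t estimate
    return x_prev, 1.0 / max_tries           # fallback: keep old particle

# Toy setup: random-walk proposal, quality = distance from the origin.
propose = lambda x: x + rng.normal(scale=0.5)
quality = lambda x: abs(x)

particles = rng.normal(size=200)
log_w = np.zeros(200)
epsilon_t = 2.0
for i, x in enumerate(particles):
    x_new, alpha_hat = prc_mutate(x, epsilon_t, propose, quality)
    particles[i] = x_new
    # The PRC correction enters the weight update multiplicatively:
    # the rejection loop normalises the kernel by alpha_t, so the
    # incremental weight carries a compensating factor of alpha_t.
    log_w[i] += np.log(alpha_hat)
```

Because rejection restricts the effective mutation kernel to the region {x : q(x) ≤ ε_t}, the kernel's density is rescaled by 1/α_t there, and the α_t factor in the weight update compensates for that rescaling.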

