A Simulated Annealing Approach to Approximate Bayes Computations
Approximate Bayes Computations (ABC) are used for parameter inference when the likelihood function of the model is expensive to evaluate but relatively cheap to sample from. In particle ABC, an ensemble of particles in the product space of model outputs and parameters is propagated in such a way that its output marginal approaches a delta function at the data and its parameter marginal approaches the posterior distribution. Inspired by Simulated Annealing, we present a new class of particle algorithms for ABC, based on a sequence of Metropolis kernels associated with a decreasing sequence of tolerances with respect to the data. Unlike other particle algorithms, this class is not based on importance sampling and hence does not suffer from a loss of effective sample size due to resampling. We prove convergence under a condition on the speed at which the tolerance is decreased. Furthermore, we present a scheme that adapts the tolerance and the jump distribution in parameter space according to certain mean fields of the ensemble, which preserves the statistical independence of the particles in the limit of infinite sample size. This adaptive scheme aims at converging as close as possible to the correct result with as few system updates as possible, by minimizing the entropy production in the system. The performance of this new class of algorithms is compared against two other recent algorithms on two toy examples.
💡 Research Summary
The paper introduces a novel class of Approximate Bayesian Computation (ABC) algorithms that draw inspiration from Simulated Annealing (SA) to overcome the limitations of traditional particle‑based ABC methods. In standard ABC‑SMC or ABC‑PMC, particles are weighted by importance and periodically resampled, which inevitably reduces the effective sample size and introduces dependence among particles. The authors propose to dispense with importance weights and resampling altogether, and instead evolve an ensemble of particles using a sequence of Metropolis–Hastings kernels whose acceptance probabilities are governed by a decreasing tolerance ε_k and an associated “temperature” τ_k.
Each particle carries a parameter vector θ and a simulated dataset x generated from the model. At iteration k a new candidate θ′ is drawn from a proposal distribution q(θ′|θ). The model is simulated again to obtain x′, and the distance d(x′, x_obs) to the observed data is computed. If d ≤ ε_k the move is accepted with probability 1; otherwise it is accepted with probability exp(−(d − ε_k)/τ_k), in direct analogy with the Metropolis acceptance criterion of Simulated Annealing.
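The per-particle update described above can be sketched in a few lines. The following is a minimal illustration, not the paper's exact scheme: the Gaussian toy model, the flat prior on [−10, 10], the geometric schedules for ε_k and τ_k, and the exponential acceptance factor applied when d exceeds ε_k are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative toy model (an assumption, not from the paper):
# x ~ Normal(theta, 1), one observation x_obs, flat prior on [-10, 10].
x_obs = 0.0

def simulate(theta):
    """Draw one dataset from the model for a given parameter."""
    return rng.normal(theta, 1.0)

def distance(x):
    """Distance between a simulated dataset and the observed data."""
    return abs(x - x_obs)

def sa_update(theta, x, eps_k, tau_k, prop_sd=0.5):
    """One annealed Metropolis update of a single particle (theta, x)."""
    theta_new = theta + rng.normal(0.0, prop_sd)  # proposal q(theta'|theta)
    if not -10.0 <= theta_new <= 10.0:            # flat prior: reject outside support
        return theta, x
    x_new = simulate(theta_new)                   # fresh simulation for the candidate
    d = distance(x_new)
    # Accept with probability 1 within the tolerance; otherwise use an
    # SA-style Boltzmann factor (one plausible form of the acceptance rule).
    if d <= eps_k or rng.random() < np.exp(-(d - eps_k) / tau_k):
        return theta_new, x_new
    return theta, x

# Evolve an ensemble under geometrically decreasing tolerance and temperature.
thetas = rng.uniform(-10.0, 10.0, size=200)
particles = [(th, simulate(th)) for th in thetas]
eps, tau = 5.0, 5.0
for _ in range(50):
    particles = [sa_update(th, x, eps, tau) for th, x in particles]
    eps *= 0.9
    tau *= 0.9

post_mean = np.mean([th for th, _ in particles])  # ensemble estimate of E[theta | x_obs]
```

Because no particle is ever reweighted or resampled, every particle evolves under the same Markov kernel, which is what preserves the (asymptotic) statistical independence of the ensemble members emphasized in the abstract.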