Expectation-Propagation for Likelihood-Free Inference


Many models of interest in the natural and social sciences have no closed-form likelihood function, which means that they cannot be treated using the usual techniques of statistical inference. In the case where such models can be efficiently simulated, Bayesian inference is still possible thanks to the Approximate Bayesian Computation (ABC) algorithm. Although many refinements have been suggested, ABC inference is still far from routine. ABC is often excruciatingly slow due to very low acceptance rates. In addition, ABC requires introducing a vector of “summary statistics”, the choice of which is relatively arbitrary and often requires some trial and error, making the whole process quite laborious for the user. We introduce in this work the EP-ABC algorithm, which is an adaptation to the likelihood-free context of the variational approximation algorithm known as Expectation Propagation (Minka, 2001). The main advantage of EP-ABC is that it is faster by a few orders of magnitude than standard algorithms, while producing an overall approximation error which is typically negligible. A second advantage of EP-ABC is that it replaces the usual global ABC constraint on the vector of summary statistics computed on the whole dataset by n local constraints of the form ‖s_i(y_i) − s_i(y_i^*)‖ ≤ ε that apply separately to each data point. As a consequence, it is often possible to do away with summary statistics entirely. In that case, EP-ABC approximates directly the evidence (marginal likelihood) of the model. Comparisons are performed in three real-world applications which are typical of likelihood-free inference, including one application in neuroscience which is novel, and possibly too challenging for standard ABC techniques.


💡 Research Summary

The paper tackles a fundamental obstacle in modern Bayesian analysis: many scientifically interesting models have intractable likelihoods, yet they can be simulated. Approximate Bayesian Computation (ABC) offers a way to perform Bayesian inference without an explicit likelihood, but standard ABC suffers from two severe drawbacks. First, it requires the user to choose a vector of summary statistics s(y); the choice is often ad‑hoc, and unless the statistics are sufficient the resulting posterior is biased. Second, the acceptance rate of the rejection step is typically extremely low, especially when the dimension of s(y) is moderate or high, leading to computational times that can stretch to days for realistic problems.

The authors propose EP‑ABC, an algorithm that adapts Expectation Propagation (EP) – a variational inference technique – to the likelihood‑free setting. The key insight is to rewrite the ABC posterior as a product of n local factors, each associated with a data “chunk” y_i. For each chunk the factor is
l_i(θ) = ∫ p(y_i | y_{1:i−1}^*, θ) · 1{‖s_i(y_i) − s_i(y_i^*)‖ ≤ ε} dy_i.
If the user can set s_i(y_i)=y_i (i.e., no summary statistics), the factor reduces to a simple indicator that the simulated chunk lies within an ε‑ball around the observed chunk. This decomposition replaces a single high‑dimensional global constraint with n low‑dimensional local constraints, dramatically increasing the probability of acceptance.
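The acceptance-rate gap between the global and local constraints can be seen in a toy simulation. The sketch below (illustrative Python, not the paper's code) uses n i.i.d. Gaussian observations: each per-point ε-ball constraint is satisfied a sizeable fraction of the time, while the global constraint that all points match simultaneously is essentially never satisfied.

```python
import numpy as np

rng = np.random.default_rng(0)
n, eps = 50, 0.5
theta_true = 1.0
y_obs = rng.normal(theta_true, 1.0, size=n)  # "observed" data

# Simulate 10,000 pseudo-datasets at the true parameter.
sims = rng.normal(theta_true, 1.0, size=(10_000, n))

# Local (EP-ABC-style): each point is matched on its own eps-ball.
local_rates = np.mean(np.abs(sims - y_obs) <= eps, axis=0)

# Global ABC: a dataset is accepted only if every point matches at once,
# so the joint rate is roughly the product of the local rates.
global_rate = np.mean(np.all(np.abs(sims - y_obs) <= eps, axis=1))

print("mean local acceptance:", local_rates.mean())
print("global acceptance:", global_rate)
```

Even with a moderate ε, the local rates stay usable while the global rate collapses geometrically in n, which is exactly why the decomposition pays off.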

EP approximates the target posterior π(θ) ∝ ∏_i l_i(θ) · p(θ) by a tractable distribution q(θ) = ∏_i f_i(θ), where each site f_i is taken to be Gaussian with natural parameters (Q_i, r_i). EP proceeds by iteratively updating one site at a time: a “hybrid” distribution h(θ) ∝ q^{−i}(θ) l_i(θ) is formed, its first two moments are matched to a Gaussian, and the site parameters are adjusted so that the product of all sites again matches those moments. In the likelihood-free context the moments of h(θ) cannot be computed analytically, but they can be estimated by Monte-Carlo sampling. Specifically, one draws M samples θ^{(m)} from the current Gaussian cavity q^{−i}(θ) = N(μ^{−i}, Σ^{−i}), simulates y_i^{(m)} ∼ p(y_i | y_{1:i−1}^*, θ^{(m)}), and retains only those draws that satisfy the local ε-constraint. The retained draws provide Monte-Carlo estimates of the required moments, which are then used to update Q_i and r_i. Because each update only involves a single factor, the computational burden is modest even when the full data set is large.
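One site update can be sketched as follows. This is a minimal illustration under simplifying assumptions (Gaussian cavity in natural parameters, a user-supplied `simulate` function, identity summaries s_i(y_i) = y_i); the function names and the fallback when too few draws are accepted are mine, not the paper's. The updated site itself would be recovered as the difference (r_new − r_cav, Q_new − Q_cav).

```python
import numpy as np

def site_update(r_cav, Q_cav, y_i, simulate, eps, M=5000, rng=None):
    """One EP-ABC-style site update by Monte-Carlo moment matching.

    r_cav, Q_cav : natural parameters (precision-mean, precision) of the
                   Gaussian cavity q^{-i}.
    simulate     : simulate(theta, rng) -> one pseudo data chunk (array).
    Returns the natural parameters of the moment-matched global Gaussian.
    """
    if rng is None:
        rng = np.random.default_rng()
    Sigma = np.linalg.inv(Q_cav)
    mu = Sigma @ r_cav
    thetas = rng.multivariate_normal(mu, Sigma, size=M)

    # Keep draws whose simulated chunk lands in the eps-ball around y_i.
    accept = np.array([np.linalg.norm(simulate(th, rng) - y_i) <= eps
                       for th in thetas])
    kept = thetas[accept]
    if len(kept) < 2:          # too few acceptances: keep the cavity as-is
        return r_cav, Q_cav

    # Match the first two moments of the hybrid, then convert back to
    # natural parameters.
    mu_h = kept.mean(axis=0)
    Sigma_h = np.atleast_2d(np.cov(kept, rowvar=False))
    Q_new = np.linalg.inv(Sigma_h)
    r_new = Q_new @ mu_h
    return r_new, Q_new
```

In a full implementation the sites would be swept over repeatedly until the global Gaussian stabilises, with M and ε tuned per site.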

The algorithm also yields an approximation of the model evidence (marginal likelihood) as a by‑product. EP’s site normalising constants C_i together with the log‑normalising term Ψ(r,Q) of the global Gaussian give
log p(y^*) ≈ ∑_i log C_i + Ψ(r,Q) – Ψ(r_0,Q_0).
This enables Bayesian model comparison without any extra machinery, a notable advantage over standard ABC where evidence estimation is notoriously difficult.
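The two ingredients of the evidence formula are straightforward to compute for a Gaussian in natural parameters. The sketch below (illustrative helper names, not from the paper) implements the standard log-normalising constant Ψ(r, Q) = (d/2)·log 2π − ½·log|Q| + ½·rᵀQ⁻¹r and combines it with the site constants as in the formula above.

```python
import numpy as np

def log_norm(r, Q):
    """Psi(r, Q): log-normalising constant of exp(r.theta - theta'Q theta/2),
    i.e. (d/2) log(2 pi) - log|Q|/2 + r' Q^{-1} r / 2."""
    d = len(r)
    _, logdet = np.linalg.slogdet(Q)
    return 0.5 * (d * np.log(2 * np.pi) - logdet + r @ np.linalg.solve(Q, r))

def log_evidence(log_C, r, Q, r0, Q0):
    """EP evidence estimate: site constants plus the difference between the
    global Gaussian's and the prior's log-normalisers."""
    return np.sum(log_C) + log_norm(r, Q) - log_norm(r0, Q0)
```

With no sites (log_C empty, posterior equal to the prior) the estimate is exactly zero, as it should be; in practice each log C_i is itself estimated from the Monte-Carlo acceptance rate of the corresponding site update.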

Three case studies illustrate the method. In a financial stochastic volatility model, standard ABC required tens of thousands of simulations to obtain a reasonable posterior, whereas EP‑ABC converged in a few hundred simulations, delivering virtually identical posterior means and credible intervals. In a population‑ecology model with hundreds of independent observations, EP‑ABC again achieved the same inferential accuracy while cutting runtime by a factor of 500. The third example, a novel vision‑science model of neural responses, is deliberately chosen because conventional ABC cannot be applied: the model’s data are high‑dimensional and no natural low‑dimensional summary exists. EP‑ABC, using the raw data chunks as “summaries”, successfully approximated both the posterior and the evidence, demonstrating that the method can handle problems previously out of reach.

The authors discuss limitations. EP’s feasibility hinges on the ability to compute (or reliably estimate) the moments of the hybrid distribution; in very high‑dimensional parameter spaces the Monte‑Carlo variance can become prohibitive, requiring large M or sophisticated variance‑reduction techniques such as quasi‑Monte‑Carlo. The current exposition assumes a Gaussian prior and Gaussian site approximations; extending to non‑Gaussian priors or more expressive exponential‑family sites is conceptually straightforward but technically non‑trivial. Moreover, the choice of ε remains a practical issue; the paper suggests adaptive schemes that shrink ε as the algorithm progresses, akin to tempering in sequential Monte‑Carlo.

In summary, EP‑ABC merges the strengths of Expectation Propagation—efficient, deterministic variational updates—with the flexibility of Approximate Bayesian Computation. By moving from a global summary‑statistic constraint to a set of local constraints, it eliminates the need for hand‑crafted summaries in many settings, dramatically improves acceptance rates, and provides a natural estimate of the marginal likelihood. The method represents a substantial step toward making likelihood‑free Bayesian inference routine for complex simulators in the natural and social sciences.

