Approximate Bayesian Computation via Regression Density Estimation

Approximate Bayesian computation (ABC) methods, which are applicable when the likelihood is difficult or impossible to calculate, are an active topic of current research. Most current ABC algorithms directly approximate the posterior distribution, but an alternative, less common strategy is to approximate the likelihood function. This has several advantages. First, in some problems, it is easier to approximate the likelihood than to approximate the posterior. Second, an approximation to the likelihood allows reference analyses to be constructed based solely on the likelihood. Third, it is straightforward to perform sensitivity analyses for several different choices of prior once an approximation to the likelihood is constructed, which needs to be done only once. The contribution of the present paper is to consider regression density estimation techniques to approximate the likelihood in the ABC setting. Our likelihood approximations build on recently developed marginal adaptation density estimators by extending them for conditional density estimation. Our approach facilitates reference Bayesian inference, as well as frequentist inference. The method is demonstrated via a challenging problem of inference for stereological extremes, where we perform both frequentist and Bayesian inference.


💡 Research Summary

The paper introduces a novel approach to Approximate Bayesian Computation (ABC) that focuses on directly approximating the likelihood function rather than the posterior distribution. Traditional ABC methods rely on simulating data under proposed parameter values, comparing summary statistics to observed data, and accepting those parameters whose simulated statistics fall within a tolerance band. While effective, this strategy ties the inference to a specific prior: any change in the prior requires a fresh round of simulations, which can be computationally prohibitive.

To overcome this limitation, the authors propose a regression‑density‑estimation framework that builds a non‑parametric model of the joint distribution of parameters θ and summary statistics s, denoted p(θ, s). They employ recently developed marginal‑adaptation density estimators, which first fit accurate marginal densities for each component and then combine them into a coherent multivariate estimate. Given the joint density, the conditional likelihood p(s|θ) follows from the identity p(s|θ) = p(θ, s)/p(θ). Crucially, p(θ) in this expression is the density from which the θ values were simulated (the proposal), not necessarily the prior. Consequently, once p(s|θ) has been estimated, it can be reused for any number of priors: the posterior for a new prior π(θ) is simply proportional to π(θ) p(s_obs|θ). Because the likelihood needs to be approximated only once, this enables rapid prior sensitivity analyses, reference Bayesian inference, and even frequentist maximum‑likelihood estimation without additional simulations.
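A minimal sketch of this idea follows. It assumes a toy simulator (s | θ ~ N(θ, 1), not from the paper) and uses SciPy's plain `gaussian_kde` as a stand‑in for the marginal‑adaptation estimator; the point illustrated is that the fitted conditional likelihood p̂(s_obs|θ) can be reweighted under different priors without any new simulations.

```python
import numpy as np
from scipy.stats import gaussian_kde, norm

rng = np.random.default_rng(0)

# Toy simulator (assumed for illustration): s | theta ~ N(theta, 1).
# theta is drawn from a broad proposal, NOT from any prior.
n = 5000
theta = rng.uniform(-3.0, 3.0, n)
s = rng.normal(theta, 1.0)

# Stand-in for the paper's marginal-adaptation estimator: a joint KDE
# and a marginal KDE. The conditional likelihood is their ratio,
# p_hat(s|theta) = p_hat(theta, s) / p_hat(theta).
joint = gaussian_kde(np.vstack([theta, s]))
marg = gaussian_kde(theta)

s_obs = 0.5
grid = np.linspace(-3.0, 3.0, 200)
lik = joint(np.vstack([grid, np.full_like(grid, s_obs)])) / marg(grid)

# Reuse the SAME estimated likelihood under two different priors:
# posterior ∝ prior(theta) * p_hat(s_obs|theta).
dx = grid[1] - grid[0]
for prior_pdf in (norm(0.0, 1.0).pdf, norm(1.0, 0.5).pdf):
    post = prior_pdf(grid) * lik
    post /= post.sum() * dx          # normalize on the grid
    print("posterior mode:", grid[np.argmax(post)])
```

Swapping priors costs only a pointwise multiplication over the grid; the expensive simulation and density-estimation steps are never repeated.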

The methodology proceeds as follows. (1) Sample a large set of θ values from a convenient proposal distribution (often the prior or a broad uniform). (2) For each θ, simulate data from the complex model and compute low‑dimensional summary statistics s. (3) Fit the marginal‑adaptation estimator to the collection of (θ, s) pairs, yielding a smooth estimate of the joint density. (4) Derive the conditional density p̂(s|θ) by dividing the joint estimate by the marginal estimate of θ. (5) For any chosen prior, compute the posterior π(θ|s_obs) ∝ π(θ) p̂(s_obs|θ) and draw samples using standard techniques such as importance sampling or MCMC.
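The five steps can be sketched end to end as follows. The simulator (s | θ ~ N(θ, 1)) and the plain Gaussian KDE are again illustrative stand‑ins for the expensive model and the marginal‑adaptation estimator, and step (5) uses importance sampling: each proposal draw is weighted by prior(θ)·p̂(s_obs|θ), with the uniform proposal density cancelling up to a constant.

```python
import numpy as np
from scipy.stats import gaussian_kde, norm

rng = np.random.default_rng(1)

# Steps (1)-(2): draw theta from a broad uniform proposal and run the
# (here, trivial) simulator to get summary statistics s.
n = 5000
theta = rng.uniform(-3.0, 3.0, n)
s = rng.normal(theta, 1.0)

# Steps (3)-(4): estimate the joint density and divide by the theta
# marginal to get the conditional likelihood at the observed summary.
joint = gaussian_kde(np.vstack([theta, s]))
marg = gaussian_kde(theta)
s_obs = 0.5
lik = joint(np.vstack([theta, np.full_like(theta, s_obs)])) / marg(theta)

# Step (5): importance sampling from the posterior for a chosen prior.
# Weight = prior(theta) * p_hat(s_obs|theta) / q(theta); the uniform
# proposal q cancels up to a constant absorbed by normalization.
prior = norm(0.0, 1.0)
w = prior.pdf(theta) * lik
w /= w.sum()
post_mean = np.sum(w * theta)
ess = 1.0 / np.sum(w**2)             # effective sample size
print("posterior mean:", post_mean, "ESS:", ess)
```

Self-normalized importance sampling is the simplest choice for step (5); the same weighted draws could instead seed an MCMC chain targeting π(θ) p̂(s_obs|θ).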

The authors demonstrate the approach on a challenging stereological extremes problem, where the goal is to infer the distribution of microscopic particle sizes from observed extreme cross‑sectional measurements. The likelihood for this problem involves intractable integrals over the unobserved three‑dimensional geometry, making it a classic ABC test case. Using their regression‑density estimator, the authors obtain a highly accurate approximation to p(s|θ). They compare the resulting posterior and maximum‑likelihood estimates to those obtained via conventional kernel‑ABC and to ground‑truth values derived from a computationally intensive exact method. The regression‑density‑based approach yields narrower credible intervals, lower mean‑squared error, and dramatically reduced computational cost when exploring multiple priors.

Key contributions of the paper include: (i) a clear demonstration that likelihood approximation, rather than posterior approximation, can be more efficient and flexible in ABC settings; (ii) the adaptation of marginal‑adaptation density estimators to conditional density estimation, providing a scalable solution for high‑dimensional summary statistics; (iii) a unified framework that supports both Bayesian and frequentist inference, allowing the same approximated likelihood to be used for posterior sampling, prior sensitivity analysis, and direct likelihood‑based hypothesis testing; (iv) empirical evidence that the method outperforms standard ABC kernels on a realistic, high‑dimensional problem.

In summary, the work offers a practical and theoretically sound alternative to traditional ABC. By investing computational effort once to learn a flexible, non‑parametric model of the likelihood, researchers can subsequently perform a wide range of inferential tasks with minimal additional cost. This paradigm is especially attractive for scientific domains where model simulations are expensive and where exploring multiple priors or performing frequentist checks is essential. Future extensions could incorporate adaptive proposal schemes, higher‑order regression models, or deep‑learning based density estimators, further broadening the applicability of likelihood‑based ABC.

