Approche variationnelle pour le calcul bayésien dans les problèmes inverses en imagerie
In an unsupervised Bayesian estimation approach for inverse problems in imaging systems, one tries to estimate jointly the unknown image pixels $\fb$ and the hyperparameters $\thetab$. This is, in general, done through the joint posterior law $p(\fb,\thetab|\gb)$. The expression of this joint law is often very complex, and its exploration through sampling, as well as the computation of point estimators such as the MAP or the posterior mean, requires either the optimization of non-convex criteria or the integration of non-Gaussian, multivariate probability laws. In either case, approximations are needed. We had previously explored the Laplace approximation and sampling by MCMC. In this paper, we explore the possibility of approximating this joint law by one that is separable in $\fb$ and in $\thetab$. This makes it possible to develop iterative algorithms with a more reasonable computational cost, in particular when the approximating laws are chosen within exponential conjugate families. The main objective of this paper is to detail the different algorithms obtained with different choices of these families.
Research Summary
The paper addresses the computational difficulty inherent in Bayesian formulations of inverse imaging problems, where one seeks a joint posterior distribution over the unknown image pixels $f$ and a set of hyper‑parameters $\theta$ given observed data $g$. The exact posterior $p(f,\theta|g)$ is typically intractable because $f$ lives in a very high‑dimensional space and the likelihood together with the priors leads to non‑Gaussian, multimodal, and highly coupled probability laws. Classical strategies such as Laplace approximation and Markov‑chain Monte‑Carlo (MCMC) sampling either introduce a strong Gaussian bias (Laplace) or demand prohibitive computational resources (MCMC) when the dimensionality grows.
To overcome these limitations, the authors propose a variational Bayesian (VB) framework that approximates the true posterior by a factorised distribution $q(f,\theta)=q(f)\,q(\theta)$. This mean‑field assumption decouples the image variables from the hyper‑parameters, allowing each factor to be optimised independently while still maximising a global objective: the Evidence Lower BOund (ELBO). Crucially, the paper advocates selecting the functional forms of $q(f)$ and $q(\theta)$ from exponential‑family conjugate families. For example, $q(f)$ is taken as a multivariate Gaussian, while hyper‑parameters such as precision or noise variance are modelled with Gamma (or inverse‑Gamma) distributions. This choice guarantees that the required expectations and sufficient statistics can be computed in closed form, leading to simple update equations.
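The closed-form updates that result from such conjugate choices can be illustrated on a deliberately simple model (a hypothetical toy example, not the paper's imaging setup): scalar observations $g_i \sim \mathcal{N}(f, 1/\lambda)$ with a Gaussian factor $q(f)$ and a Gamma factor $q(\lambda)$, updated by coordinate ascent. All function and parameter names below are illustrative.

```python
import numpy as np

# Toy conjugate model (illustration only, not the paper's imaging problem):
#   g_i ~ N(f, 1/lam),  f ~ N(0, 1/alpha0) with alpha0 known,
#   lam ~ Gamma(a0, b0)  (shape/rate parameterisation).
# Mean-field factors: q(f) = N(mu, s2), q(lam) = Gamma(a, b).

def vb_updates(g, alpha0=1e-2, a0=1e-3, b0=1e-3, n_iter=50):
    n = g.size
    a = a0 + n / 2.0                       # shape is fixed after the first update
    b = b0 + 1.0                           # arbitrary initialisation of the rate
    mu, s2 = 0.0, 1.0
    for _ in range(n_iter):
        e_lam = a / b                      # E_q[lam] under the Gamma factor
        tau = alpha0 + n * e_lam           # precision of the Gaussian factor q(f)
        s2 = 1.0 / tau
        mu = e_lam * g.sum() / tau         # mean of q(f)
        # q(lam) update uses E_q[sum_i (g_i - f)^2] = sum_i (g_i - mu)^2 + n * s2
        b = b0 + 0.5 * (np.sum((g - mu) ** 2) + n * s2)
    return mu, s2, a, b

rng = np.random.default_rng(0)
g = rng.normal(2.0, 0.5, size=500)         # true f = 2.0, true lam = 1/0.5**2 = 4.0
mu, s2, a, b = vb_updates(g)
print(mu, a / b)                           # mu near 2.0, E[lam] near 4.0
```

Each iteration only needs the expectations of the other factor's sufficient statistics, which is exactly what the conjugate exponential-family choice guarantees in closed form.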
The derivation proceeds by writing the ELBO, $\mathcal{L}(q)=\mathbb{E}_{q}[\log p(g,f,\theta)]-\mathbb{E}_{q}[\log q(f,\theta)]$, whose maximisation is equivalent to minimising the Kullback–Leibler divergence between $q(f,\theta)$ and the true posterior $p(f,\theta|g)$; alternating coordinate-ascent updates of $q(f)$ and $q(\theta)$ then yield the closed-form iterations.
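The fact that the ELBO lower-bounds the log evidence, with equality when $q$ equals the true posterior, can be checked numerically on a one-dimensional conjugate example where every expectation is analytic (a hypothetical illustration, unrelated to the paper's model):

```python
import math

# One observation g from: theta ~ N(0, 1), g | theta ~ N(theta, 1).
# Exact evidence: p(g) = N(g; 0, 2). True posterior: N(g/2, 1/2).
# Gaussian variational factor q(theta) = N(m, s2); all terms are closed form.

def elbo(g, m, s2):
    e_loglik = -0.5 * math.log(2 * math.pi) - 0.5 * ((g - m) ** 2 + s2)
    e_logprior = -0.5 * math.log(2 * math.pi) - 0.5 * (m ** 2 + s2)
    entropy = 0.5 * math.log(2 * math.pi * s2) + 0.5  # -E_q[log q]
    return e_loglik + e_logprior + entropy

g = 1.3
log_evidence = -0.5 * math.log(2 * math.pi * 2.0) - g ** 2 / 4.0

# The ELBO never exceeds the log evidence, whatever (m, s2) we pick:
for m, s2 in [(0.0, 1.0), (1.0, 0.2), (g / 2, 0.5)]:
    assert elbo(g, m, s2) <= log_evidence + 1e-9

# The bound is tight exactly when q is the true posterior N(g/2, 1/2):
assert abs(elbo(g, g / 2, 0.5) - log_evidence) < 1e-9
print("ELBO <= log p(g), with equality at the true posterior")
```

Maximising the ELBO is therefore the same as minimising $\mathrm{KL}(q \,\|\, p(\cdot|g))$, since the two quantities differ only by the constant $\log p(g)$.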