A Gibbs posterior sampler for inverse problem based on prior diffusion model


This paper addresses inversion problems in which (1) the observation system is modeled by a linear transformation with additive noise, (2) the problem is ill-posed and regularization is introduced in a Bayesian framework through a prior density, and (3) that prior is modeled by a diffusion process trained on a large set of available examples. In this context, posterior sampling is known to be difficult. This paper introduces a Gibbs algorithm. This avenue appears not to have been explored before, and we show that the approach is both particularly effective and remarkably simple. In addition, it offers a convergence guarantee in a clearly identified situation. Numerical simulations clearly confirm these results.


💡 Research Summary

The paper tackles the classic Bayesian inverse‑problem setting in which the forward model is linear with additive Gaussian noise, y = H x₀ + e, and the prior on the unknown image x₀ is learned from a large collection of example images. The prior is represented by a diffusion model: a forward stochastic process that gradually adds noise to clean images and a backward stochastic process that denoises noisy images. Both processes are Markovian and Gaussian: the forward transitions are p⁺(xₜ | xₜ₋₁) = 𝒩(xₜ; kₜ xₜ₋₁, v⁺ₜ I) and the backward transitions are p⁻(xₜ | xₜ₊₁) = 𝒩(xₜ; µ_θ,ₜ(xₜ₊₁), v⁻ₜ I), where µ_θ,ₜ is a time‑conditioned denoiser implemented by a neural network. Training minimizes the Kullback‑Leibler divergence between the joint forward and backward distributions, forcing the two to share the same marginals (the data‑driven prior π₀ at one end and a standard Gaussian for the terminal noise at the other).
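The forward noising process described above can be sketched in a few lines. This is an illustrative stand-in, not the paper's exact schedule: the choice kₜ = √(1 − βₜ), v⁺ₜ = βₜ is the standard variance-preserving convention, assumed here so that the terminal marginal approaches a standard Gaussian.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative variance-preserving schedule (assumption: k_t = sqrt(1 - beta_t),
# v+_t = beta_t; the paper's actual schedule may differ).
T = 10
betas = np.linspace(1e-2, 0.3, T)
k = np.sqrt(1.0 - betas)        # k_t: per-step signal attenuation
v_plus = betas                  # v+_t: per-step forward noise variance

def forward_step(x_prev, t):
    """One forward transition: x_t ~ N(k_t x_{t-1}, v+_t I)."""
    return k[t] * x_prev + np.sqrt(v_plus[t]) * rng.standard_normal(x_prev.shape)

x0 = rng.standard_normal((32, 32))   # stand-in for a clean image with unit variance
x = x0
for t in range(T):
    x = forward_step(x, t)
# With this schedule, Var(x_t) = k_t^2 Var(x_{t-1}) + v+_t stays at 1,
# so x_T is (approximately) standard Gaussian noise.
```

With unit-variance data this schedule keeps the marginal variance exactly at 1 at every step, which is why the terminal latent can be initialized from 𝒩(0, I) at sampling time.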

After training, the goal is to sample from the posterior π₀:T(x₀:T | y) ∝ f(y | x₀) π₀:T(x₀:T). Existing works either rely on ancestral sampling of the diffusion prior (which ignores the likelihood), or on sophisticated Sequential Monte Carlo (SMC) or annealed MCMC schemes that are computationally heavy and often only approximate the true posterior. The authors propose a fundamentally different approach: a Gibbs sampler that iteratively updates each latent variable xₜ conditioned on all the others and the observation y. Crucially, because both forward and backward processes are Gaussian, every conditional distribution is also Gaussian and can be sampled in closed form.

The conditional for the image of interest x₀ combines the backward transition p⁻(x₀ | x₁) and the likelihood f(y | x₀). This yields a Gaussian with precision Γ₀ = HᵀH/vₑ + I/v⁻₀ and mean ε₀ = Γ₀⁻¹(Hᵀy/vₑ + µ₀(x₁)/v⁻₀). When H is a convolution operator, Γ₀ is diagonal in the Fourier domain, so the mean can be computed efficiently with the FFT. For intermediate latents xₜ (1 ≤ t ≤ T−1), the conditional is proportional to the product of the two adjacent forward transitions, giving a Gaussian with precision γₜ = 1/v⁺ₜ + k²ₜ₊₁/v⁺ₜ₊₁ and mean εₜ = γₜ⁻¹(kₜ xₜ₋₁/v⁺ₜ + kₜ₊₁ xₜ₊₁/v⁺ₜ₊₁). The final latent x_T has a simple Gaussian conditional derived from the forward transition alone. Thus each Gibbs sweep requires (i) a single forward pass through the denoising network to compute µ₀(x₁), and (ii) a set of linear/Gaussian updates that are either element‑wise or FFT‑based. No Metropolis‑Hastings steps or particle resampling are needed.
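One full Gibbs sweep can be sketched as follows. This is a minimal sketch under stated assumptions: the schedule values (k, v⁺, v⁻), the placeholder denoiser `mu0`, and the periodic (circulant) blur H are illustrative stand-ins, not the paper's trained components; the circulant assumption is what makes Γ₀ FFT-diagonal.

```python
import numpy as np

rng = np.random.default_rng(1)
n, T = 32, 10
k   = np.full(T + 1, 0.98)     # k_t for t = 1..T (illustrative constant schedule)
vp  = np.full(T + 1, 0.04)     # forward variances v+_t (illustrative)
vm0 = 0.04                     # backward variance v-_0 (illustrative)
ve  = 0.05 ** 2                # observation noise variance

h = np.zeros((n, n)); h[:3, :3] = 1.0 / 9.0   # periodic 3x3 box blur (circulant H)
H_f = np.fft.fft2(h)                           # its Fourier transfer function

def mu0(x1):
    # Placeholder for the trained denoiser mu_theta_0(x_1) (hypothetical stand-in).
    return x1

def sample_x0(y, x1):
    """Draw x_0 | x_1, y: Gaussian with precision Gamma_0 = H^T H/ve + I/v-_0,
    which is diagonal in the Fourier domain for a circulant H."""
    gamma_f = np.abs(H_f) ** 2 / ve + 1.0 / vm0            # Fourier eigenvalues of Gamma_0
    mean_f = (np.conj(H_f) * np.fft.fft2(y) / ve
              + np.fft.fft2(mu0(x1)) / vm0) / gamma_f      # eps_0 in Fourier
    noise_f = np.fft.fft2(rng.standard_normal((n, n))) / np.sqrt(gamma_f)
    return np.fft.ifft2(mean_f + noise_f).real             # sample with covariance Gamma_0^{-1}

def sample_xt(x_prev, x_next, t):
    """Draw x_t | x_{t-1}, x_{t+1} for 1 <= t <= T-1: element-wise Gaussian."""
    gamma = 1.0 / vp[t] + k[t + 1] ** 2 / vp[t + 1]
    mean = (k[t] * x_prev / vp[t] + k[t + 1] * x_next / vp[t + 1]) / gamma
    return mean + rng.standard_normal(mean.shape) / np.sqrt(gamma)

def gibbs_sweep(xs, y):
    """xs = [x_0, ..., x_T]; one systematic sweep updating every latent once."""
    xs[0] = sample_x0(y, xs[1])                 # uses the one network call per sweep
    for t in range(1, T):
        xs[t] = sample_xt(xs[t - 1], xs[t + 1], t)
    xs[T] = k[T] * xs[T - 1] + np.sqrt(vp[T]) * rng.standard_normal((n, n))
    return xs
```

Note how the sweep matches the cost accounting in the text: a single `mu0` evaluation (the network call), one pair of FFTs for x₀, and purely element-wise arithmetic for all the intermediate latents.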

Convergence is argued informally: if the learned forward and backward joint distributions were identical, the Gibbs sampler would be a standard block‑Gibbs sampler on a fully specified joint Gaussian, guaranteeing geometric convergence. In practice the training objective forces the two distributions to be close, and empirical results show rapid mixing. A formal proof for the mismatched case is left as future work.
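The geometric-convergence claim for the matched case can be illustrated on the smallest possible example. The sketch below (an assumption-free textbook fact, not the paper's proof) runs the systematic-sweep mean recursion of a Gibbs sampler on a bivariate standard Gaussian with correlation ρ; the noise terms are omitted to expose the deterministic contraction of the conditional means.

```python
# Block Gibbs on a bivariate N(0, I) pair with correlation rho:
# the conditionals are x | y ~ N(rho*y, 1 - rho^2) and y | x ~ N(rho*x, 1 - rho^2).
# Tracking only the conditional means shows the chain forgets its starting
# point at the geometric rate rho^2 per sweep.
rho = 0.8
x, y = 5.0, 5.0          # start far from the stationary mean (0, 0)
for sweep in range(20):
    x = rho * y          # E[x | y]  (noise term dropped on purpose)
    y = rho * x          # E[y | x]
# After n sweeps: y = 5 * rho^(2n), i.e. geometric decay at rate rho^2.
```

The same mechanism drives the full Gaussian chain: the stochastic sampler is this contraction plus stationary noise, which is exactly why a block-Gibbs sweep on a fully specified joint Gaussian mixes geometrically.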

The experimental section validates the method on a toy deconvolution problem built from MNIST digits. Images are 32 × 32, blurred with a 3 × 3 box kernel and corrupted with σₑ = 0.05 Gaussian noise. The diffusion prior uses T = 10 time steps. The Gibbs sampler is run for 1,030 iterations; each iteration updates all latents sequentially. On a 3.8 GHz Intel i7 CPU the total runtime is 53 seconds, with ~85 % of the time spent inside the neural network. Results show:

  • Pixel‑wise posterior means within 0.07 absolute error of the ground truth.
  • Posterior credible intervals, built from the pixel‑wise posterior standard deviations (PSDs), that contain the true value in >95 % of cases.
  • Visual reconstructions that are virtually indistinguishable from the original clean images.
  • Rapid convergence for some pixels (burn‑in ≈ 10 iterations) and reasonable mixing for others (≈ 300 iterations), confirmed by trace plots and autocorrelation analysis.
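The toy observation model from these experiments is easy to reproduce. A minimal sketch, assuming periodic boundary handling for the blur (an assumption chosen because it makes HᵀH FFT-diagonal; the paper may handle borders differently) and a random image standing in for an MNIST digit:

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma_e = 32, 0.05                            # 32x32 image, noise std from the text

x_true = rng.random((n, n))                      # stand-in for a clean MNIST digit
h = np.zeros((n, n)); h[:3, :3] = 1.0 / 9.0      # 3x3 box kernel, normalized to sum 1
H_f = np.fft.fft2(h)                             # transfer function of the circulant H

blurred = np.fft.ifft2(H_f * np.fft.fft2(x_true)).real   # H x_0 (periodic convolution)
y = blurred + sigma_e * rng.standard_normal((n, n))       # y = H x_0 + e
```

Because the box kernel sums to one, the periodic blur preserves the image mean exactly; only the additive noise perturbs it.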

The authors highlight several practical advantages: (1) no hyper‑parameter tuning beyond the number of Gibbs iterations, (2) scalability to larger images because each iteration requires only one network evaluation, (3) natural quantification of uncertainty via posterior samples, and (4) the possibility of parallel batch sampling on GPUs.

In summary, the paper introduces a novel Gibbs‑based posterior sampler (G‑DPS) for diffusion‑prior Bayesian inverse problems. By exploiting the Gaussian structure of both forward and backward diffusion processes, the algorithm reduces posterior sampling to a sequence of closed‑form Gaussian draws, achieving high computational efficiency, provable (under mild assumptions) convergence, and accurate uncertainty quantification. The work opens a promising direction for integrating learned diffusion priors into rigorous Bayesian inference pipelines, while suggesting future research on formal convergence guarantees when the forward and backward models are only approximately matched, and on extending the approach to high‑resolution, multi‑channel imaging tasks.

