Bayesian decomposition using Besov priors

Bayesian decomposition using Besov priors
Notice: This research summary and analysis were automatically generated using AI technology. For absolute accuracy, please refer to the [Original Paper Viewer] below or the Original ArXiv Source.

In many inverse problems, the unknown is composed of multiple components with different regularities, for example, in imaging problems, where the unknown can have both rough and smooth features. We investigate linear Bayesian inverse problems, where the unknown consists of two components: one smooth and one piecewise constant. We model the unknown as a sum of two components and assign individual priors on each component to impose the assumed behavior. We propose and compare two prior models: (i) a combination of a Haar wavelet-based Besov prior and a smoothing Besov prior, and (ii) a hierarchical Gaussian prior on the gradient coupled with a smoothing Besov prior. To achieve a balanced reconstruction, we place hyperpriors on the prior parameters and jointly infer both the components and the hyperparameters. We propose Gibbs sampling schemes for posterior inference in both prior models. We demonstrate the capabilities of our approach on 1D and 2D deconvolution problems, where the unknown consists of smooth parts with jumps. The numerical results indicate that our methods improve the reconstruction quality compared to single-prior approaches and that the prior parameters can be successfully estimated to yield a balanced decomposition.


💡 Research Summary

This paper addresses linear Bayesian inverse problems in which the unknown signal is naturally composed of two distinct components: a smooth part and a piecewise‑constant part. The authors model the unknown as a sum f = g + h, where g captures jumps and discontinuities while h captures smooth variations. To enforce these structural assumptions they assign separate Besov priors to each component. The first component g receives a Haar‑wavelet based Besov prior, which promotes sparsity of wavelet coefficients and therefore sharp edges. The second component h receives a Besov prior built on a smooth wavelet basis (e.g., a high‑order Daubechies wavelet), which penalises high‑frequency coefficients and yields smooth reconstructions.

Two Bayesian formulations are proposed. (i) Two‑Besov decomposition: the posterior combines the Gaussian likelihood (with known noise variance) and the two independent Besov priors, each weighted by a strength parameter λ_g and λ_h. When λ_g and λ_h are fixed, the posterior is differentiable almost everywhere, allowing the use of the No‑U‑Turn Sampler (NUTS), an adaptive Hamiltonian Monte Carlo method, to draw samples efficiently. (ii) Hierarchical model: the authors introduce hyper‑parameters for the prior strengths and, for the jump component, replace the Haar Besov prior by a hierarchical Gaussian prior on the discrete gradient (a sparsity‑inducing Laplacian‑type prior). The smooth component still uses a Besov prior. Hyper‑priors (e.g., Gamma distributions) are placed on all strength parameters, and a Gibbs sampler is derived. Conditional distributions are either analytically tractable or can be sampled using a Randomize‑Then‑Optimize (RTO) step, ensuring practical computation even though the full posterior is high‑dimensional and potentially multimodal.

A key technical challenge discussed is the high mutual coherence between the two wavelet bases, which can cause identifiability problems when both λ_g and λ_h are treated as unknown. The hierarchical formulation mitigates this by coupling the hyper‑parameters with informative priors and by using Gibbs updates that respect the conditional structure of the model.

The methods are evaluated on synthetic one‑dimensional and two‑dimensional deconvolution problems. In each case the true signal contains smooth regions interrupted by abrupt jumps. The authors compare three approaches: (a) a single Besov prior (either smooth or piecewise‑constant), (b) the two‑Besov model with fixed λ’s, and (c) the hierarchical model with λ’s inferred. Quantitative metrics (PSNR, SSIM) show that both two‑component approaches outperform the single‑prior baseline by 2–3 dB in PSNR and noticeable SSIM gains. Visual inspection confirms that the reconstructed g and h components are well separated, and the hierarchical model automatically balances the contributions of the two priors, eliminating the need for manual tuning.

The paper’s contributions are threefold: (1) a Bayesian decomposition framework that simultaneously recovers multiple structural components and quantifies posterior uncertainty; (2) a novel combination of two distinct Besov priors within a single posterior, and a hierarchical extension that enables automatic hyper‑parameter learning; (3) practical sampling algorithms (NUTS and a tailored Gibbs sampler with RTO) that make inference tractable for high‑dimensional image problems. The authors suggest future work on non‑linear forward models, extensions to more than two components, and integration with learned wavelet dictionaries or deep priors. Overall, the study demonstrates that Besov‑based Bayesian priors, when coupled with hierarchical modeling and modern MCMC techniques, provide a powerful and flexible tool for decomposing and reconstructing complex signals while rigorously accounting for uncertainty.


Comments & Academic Discussion

Loading comments...

Leave a Comment