A Comparative Study of MAP and LMMSE Estimators for Blind Inverse Problems
Maximum-a-posteriori (MAP) approaches are an effective framework for inverse problems with known forward operators, particularly when combined with expressive priors and careful parameter selection. In blind settings, however, their use becomes significantly less stable due to the inherent non-convexity of the problem and the potential non-identifiability of the solutions. (Linear) minimum mean square error (MMSE) estimators provide a compelling alternative that can circumvent these limitations. In this work, we study synthetic two-dimensional blind deconvolution problems under fully controlled conditions, with complete prior knowledge of both the signal and kernel distributions. We compare tailored MAP algorithms with simple LMMSE estimators whose functional form is closely related to that of an optimal Tikhonov estimator. Our results show that, even in these highly controlled settings, MAP methods remain unstable and require extensive parameter tuning, whereas the LMMSE estimator yields a robust and reliable baseline. Moreover, we demonstrate empirically that the LMMSE solution can serve as an effective initialization for MAP approaches, improving their performance and reducing sensitivity to regularization parameters, thereby opening the door to future theoretical and practical developments.
💡 Research Summary
This paper presents a systematic comparison between Maximum‑a‑Posteriori (MAP) and Linear Minimum Mean Square Error (LMMSE) estimators for blind inverse problems, focusing on a synthetic two‑dimensional blind deconvolution setting where the full statistical models of both the latent image and the blur kernel are known. The authors generate 32×32 images by drawing sparse coefficients from a Laplace distribution in a DCT basis, and blur them with 15×15 isotropic Gaussian kernels whose spread σ follows a Gamma distribution. The observations are corrupted by additive white Gaussian noise of relatively high variance (c_ε = 0.0009), yielding a challenging yet fully controlled dataset of 50 (x, h, y) triples.
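The generative model described above can be sketched as follows. This is a minimal NumPy/SciPy illustration, not the authors' code: the Laplace scale and the Gamma shape/scale parameters are assumptions, since the summary does not list them.

```python
import numpy as np
from scipy.fft import idctn
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)

def sample_triple(n=32, k=15, c_eps=0.0009):
    # Sparse image: Laplace-distributed coefficients in a DCT basis
    # (the Laplace scale 0.1 is an illustrative assumption).
    alpha = rng.laplace(scale=0.1, size=(n, n))
    x = idctn(alpha, norm="ortho")

    # Isotropic Gaussian kernel whose spread sigma is Gamma-distributed
    # (Gamma shape/scale are illustrative assumptions).
    sigma = rng.gamma(shape=2.0, scale=1.0)
    t = np.arange(k) - k // 2
    g = np.exp(-t**2 / (2.0 * sigma**2))
    h = np.outer(g, g)
    h /= h.sum()  # normalize so the kernel lies on the simplex

    # Blurred observation plus white Gaussian noise of variance c_eps
    y = fftconvolve(x, h, mode="same")
    y += np.sqrt(c_eps) * rng.standard_normal((n, n))
    return x, h, y

dataset = [sample_triple() for _ in range(50)]
```

With these choices each run produces a fully controlled (x, h, y) triple, matching the paper's setting of complete prior knowledge of both signal and kernel distributions.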
In the MAP framework, the posterior p(x,h|y) is maximized via an energy minimization that combines an ℓ2 data‑fidelity term, an ℓ1 sparsity prior on the DCT coefficients (Rα), and a kernel regularizer. Two kernel priors are examined: (i) a Gamma‑derived prior on the Gaussian spread σ (denoted MAP σ) and (ii) a simple smoothness penalty ∥∇h∥₂ (denoted MAP h). Because the blind problem is intrinsically non‑convex, the authors employ an alternating minimization scheme: proximal gradient descent on the coefficient vector α for several inner steps, followed by a gradient step on the kernel (or on σ) with projection onto the simplex when required. Regularization weights λ_α and λ_h are treated as free hyper‑parameters and tuned by exhaustive grid search.
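The alternating scheme above (proximal gradient on the DCT coefficients, then a projected gradient step on the kernel) can be sketched for the MAP h variant as follows. This is a hedged reconstruction, not the authors' implementation: step sizes, iteration counts, and regularization weights are placeholders, and the smoothness penalty is realized through a discrete Laplacian.

```python
import numpy as np
from scipy.fft import dctn, idctn
from scipy.signal import fftconvolve, correlate2d

def soft_threshold(v, t):
    # Proximal operator of the l1 norm.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def project_simplex(v):
    # Euclidean projection onto the probability simplex (sorting method).
    u = np.sort(v.ravel())[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / (np.arange(u.size) + 1) > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0.0)

def map_blind(y, k=15, lam_a=1e-3, lam_h=1e-2,
              outer=20, inner=10, step_a=0.5, step_h=1e-2):
    """Alternating MAP sketch (MAP h variant): ISTA on the DCT
    coefficients alpha, projected gradient on the kernel h with an
    l2-smoothness penalty. All hyper-parameters are illustrative."""
    n = y.shape[0]
    alpha = dctn(y, norm="ortho")          # warm start from the observation
    h = np.full((k, k), 1.0 / k**2)        # uniform kernel on the simplex
    for _ in range(outer):
        for _ in range(inner):
            # Proximal gradient (ISTA) step on alpha.
            x = idctn(alpha, norm="ortho")
            r = fftconvolve(x, h, mode="same") - y
            grad = dctn(fftconvolve(r, h[::-1, ::-1], mode="same"),
                        norm="ortho")
            alpha = soft_threshold(alpha - step_a * grad, step_a * lam_a)
        x = idctn(alpha, norm="ortho")
        r = fftconvolve(x, h, mode="same") - y
        # Gradient of the data term w.r.t. h is a cross-correlation of the
        # residual with the image, restricted to the kernel support.
        c = n - 1
        g_data = correlate2d(r, x, mode="full")[
            c - k // 2: c + k // 2 + 1, c - k // 2: c + k // 2 + 1]
        # Gradient of the smoothness penalty via a discrete Laplacian.
        lap = (np.roll(h, 1, 0) + np.roll(h, -1, 0)
               + np.roll(h, 1, 1) + np.roll(h, -1, 1) - 4 * h)
        h = project_simplex(h - step_h * (g_data - lam_h * lap))
    return idctn(alpha, norm="ortho"), h
```

The simplex projection keeps the kernel non-negative and normalized after every update, which is the "projection onto the simplex when required" step mentioned above.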
The LMMSE estimator is derived analytically for the blind setting, yielding a closed‑form linear estimator of the standard form x̂ = E[x] + C_xy C_yy⁻¹ (y − E[y]), whose structure is closely related to that of an optimal Tikhonov estimator.
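The closed-form estimator above can be illustrated empirically by estimating the required moments from samples of the known generative model. This is a generic Monte Carlo sketch of the LMMSE formula, not the paper's analytic derivation; the small ridge term is an assumption added for numerical stability.

```python
import numpy as np

def lmmse_from_samples(X, Y, ridge=1e-8):
    """Empirical LMMSE: x_hat = mu_x + C_xy C_yy^{-1} (y - mu_y).

    X, Y: (N, d) arrays of vectorized signal / observation samples
    drawn from the (known) joint distribution."""
    mu_x, mu_y = X.mean(axis=0), Y.mean(axis=0)
    Xc, Yc = X - mu_x, Y - mu_y
    N, d = Y.shape
    C_xy = Xc.T @ Yc / N                      # cross-covariance
    C_yy = Yc.T @ Yc / N + ridge * np.eye(d)  # observation covariance
    # Linear map W = C_xy C_yy^{-1}, solved rather than inverted.
    W = np.linalg.solve(C_yy.T, C_xy.T).T

    def estimate(y):
        return mu_x + W @ (y - mu_y)
    return estimate
```

Because the moments can be computed once offline, the resulting estimator is a single affine map, which is what makes it a robust, tuning-free baseline in the blind setting.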