Message Passing Algorithms for Compressed Sensing
Compressed sensing aims to undersample certain high-dimensional signals, yet accurately reconstruct them by exploiting signal characteristics. Accurate reconstruction is possible when the object to be recovered is sufficiently sparse in a known basis. Currently, the best known sparsity-undersampling tradeoff is achieved when reconstructing by convex optimization – which is expensive in important large-scale applications. Fast iterative thresholding algorithms have been intensively studied as alternatives to convex optimization for large-scale problems. Unfortunately known fast algorithms offer substantially worse sparsity-undersampling tradeoffs than convex optimization. We introduce a simple costless modification to iterative thresholding making the sparsity-undersampling tradeoff of the new algorithms equivalent to that of the corresponding convex optimization procedures. The new iterative-thresholding algorithms are inspired by belief propagation in graphical models. Our empirical measurements of the sparsity-undersampling tradeoff for the new algorithms agree with theoretical calculations. We show that a state evolution formalism correctly derives the true sparsity-undersampling tradeoff. There is a surprising agreement between earlier calculations based on random convex polytopes and this new, apparently very different theoretical formalism.
💡 Research Summary
Compressed sensing (CS) seeks to recover high‑dimensional signals from far fewer linear measurements than traditional Nyquist sampling would require. The key to successful recovery is sparsity: the unknown vector x₀ has only a small number k of non‑zero entries when expressed in a known basis. Theoretical work has shown that, for random measurement matrices with i.i.d. Gaussian entries, there exists a sharp phase transition in the (δ = M/N, ρ = k/M) plane. Below the transition, exact recovery is possible with overwhelming probability; above it, recovery fails for almost all sparse signals. Convex ℓ₁‑minimization (Basis Pursuit, LASSO) achieves the optimal trade‑off, but solving a large‑scale convex program is computationally expensive, especially when N is in the tens or hundreds of thousands.
Iterative thresholding (IT) algorithms—simple schemes that alternate a linear step with a pointwise non‑linear shrinkage—have been proposed as fast alternatives. A typical IT iteration takes the form
x^{t+1}=η(Aᵀr^{t}+x^{t}), r^{t}=y−Ax^{t},
where η is a soft‑ or hard‑thresholding function. While each iteration costs O(NM) and is trivially parallelizable, the empirical sparsity‑undersampling trade‑off of IT is markedly inferior to that of ℓ₁‑minimization. The main culprit is the buildup of statistical dependencies between the residual r^{t} and the current estimate x^{t}, which causes the effective noise seen by the thresholding step to deviate from the assumed i.i.d. Gaussian model.
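The basic IT iteration can be sketched as follows with a soft-thresholding non-linearity. The step size `step` and fixed threshold `theta` are illustrative choices for this unnormalized sketch, not values from the paper (whose iteration uses step 1 with a suitably scaled measurement ensemble):

```python
import numpy as np

def soft_threshold(x, theta):
    """Pointwise soft-thresholding non-linearity η(x; θ)."""
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def iterative_thresholding(y, A, theta=0.05, step=0.15, n_iter=1000):
    """Plain iterative soft thresholding for y = A x0.

    Alternates the linear step x + step * Aᵀ(y - A x) with shrinkage.
    `step` and `theta` are illustrative tuning choices, not the paper's.
    """
    M, N = A.shape
    x = np.zeros(N)
    for _ in range(n_iter):
        r = y - A @ x                                   # residual
        x = soft_threshold(x + step * (A.T @ r), step * theta)
    return x
```

Each iteration is dominated by the two matrix-vector products, giving the O(NM) per-iteration cost noted above.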
The present paper introduces a “cost‑less” modification to IT that restores the optimal trade‑off. Inspired by belief propagation on dense factor graphs, the authors add an Onsager‑type correction term to the residual update, yielding the Approximate Message Passing (AMP) algorithm:
r^{t}=y−Ax^{t}+ (1/δ) r^{t−1}⟨η′(Aᵀr^{t−1}+x^{t−1})⟩,
x^{t+1}=η(Aᵀr^{t}+x^{t}).
The extra term precisely cancels the leading order correlation introduced in the previous iteration, making the effective noise at each step asymptotically Gaussian and independent of the current estimate. Consequently, the dynamics of AMP can be captured by a scalar recursion known as state evolution (SE).
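The AMP iteration above can be sketched in a few lines. For soft thresholding, η′ is the indicator of the entries above threshold, so ⟨η′⟩ is just the fraction of non-zeros in the current estimate. The threshold schedule used here (α times the residual-based noise estimate ‖r‖/√M, with α = 2.0) is one illustrative tuning, not the paper's optimal choice:

```python
import numpy as np

def soft_threshold(x, theta):
    """Pointwise soft-thresholding non-linearity η(x; θ)."""
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def amp(y, A, alpha=2.0, n_iter=30):
    """Approximate Message Passing with the Onsager correction term.

    `alpha` sets the threshold as a multiple of the estimated effective
    noise level; 2.0 is an illustrative value, not a tuned schedule.
    """
    M, N = A.shape
    x = np.zeros(N)
    r = y.copy()
    for _ in range(n_iter):
        theta = alpha * np.linalg.norm(r) / np.sqrt(M)  # estimate of τ_t
        z = x + A.T @ r                                  # effective observation
        x = soft_threshold(z, theta)
        # Onsager term: (1/δ)·⟨η′⟩ = (N/M)·(#nonzeros/N) = #nonzeros/M
        r = y - A @ x + r * (np.count_nonzero(x) / M)
    return x
```

Dropping the final `r * (...)` term recovers plain iterative thresholding; keeping it is what restores the optimal trade-off.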
State evolution predicts the effective noise variance τ_{t}² (and hence the mean‑square error) at iteration t via the scalar recursion
τ_{t+1}² = σ_w² + (1/δ) E[(η(X + τ_{t}Z) − X)²],
where X is drawn from the signal distribution, Z ~ N(0,1) is independent of X, and σ_w² is the measurement‑noise variance (zero in the noiseless setting). The fixed points of this recursion determine the sparsity‑undersampling trade‑off of AMP, and the resulting phase boundary agrees with the one derived earlier from the combinatorial geometry of random convex polytopes.
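The state-evolution recursion is a scalar fixed-point iteration and can be evaluated by Monte Carlo estimation of the expectation. The sparse three-point prior (X = ±1 with probability ε/2 each, 0 otherwise) and the threshold rule θ_t = α·τ_t used below are illustrative choices, not prescribed by the source:

```python
import numpy as np

def state_evolution(delta, eps, alpha, sigma_w=0.0, n_iter=20,
                    n_mc=100_000, seed=0):
    """Scalar state-evolution recursion for soft-threshold AMP.

    Monte Carlo estimate of E[(η(X + τZ; ατ) - X)²] under an
    illustrative prior: X = ±1 w.p. eps/2 each, 0 w.p. 1 - eps.
    Returns the trajectory of τ_t².
    """
    rng = np.random.default_rng(seed)
    X = rng.choice([-1.0, 0.0, 1.0], size=n_mc, p=[eps / 2, 1 - eps, eps / 2])
    Z = rng.standard_normal(n_mc)
    tau2 = sigma_w**2 + np.mean(X**2) / delta           # τ_0² with x^0 = 0
    history = [tau2]
    for _ in range(n_iter):
        tau = np.sqrt(tau2)
        u = X + tau * Z                                  # effective observation
        eta = np.sign(u) * np.maximum(np.abs(u) - alpha * tau, 0.0)
        tau2 = sigma_w**2 + np.mean((eta - X)**2) / delta
        history.append(tau2)
    return history
```

In the noiseless case, a trajectory that contracts to τ² = 0 corresponds to exact recovery; parameters below the phase transition (e.g. δ = 0.5 with ε well below the boundary) exhibit this contraction.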