Alternating Subspace Method for Sparse Recovery of Signals

Notice: This research summary and analysis were automatically generated using AI technology. For absolute accuracy, please refer to the original arXiv source.

This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible. Numerous renowned algorithms for tackling the compressed sensing problem employ an alternating strategy, which typically involves data matching in one module and denoising in another. We present a novel approach, the Alternating Subspace Method (ASM), which integrates the principles of the greedy methods (e.g., the orthogonal matching pursuit type methods) and the splitting methods (e.g., the approximate message passing type methods). Crucially, ASM enhances the splitting method by achieving fidelity in a subspace-restricted fashion. We reveal that such a restriction strategy guarantees global convergence via proximal residual control and establish its local geometric convergence on the LASSO problem. Numerical experiments on the LASSO, channel estimation, and dynamic compressed sensing problems demonstrate its high convergence rate and its capacity to incorporate different prior distributions. Overall, the proposed method is promising in terms of efficiency, accuracy, and flexibility, and has the potential to be competitive in different sparse recovery applications.


💡 Research Summary

The paper introduces the Alternating Subspace Method (ASM), a novel algorithm for solving sparse linear inverse problems such as the LASSO, by embedding subspace‑restricted data‑fidelity updates into the Alternating Direction Method of Multipliers (ADMM) framework. Traditional ADMM solves a regularized least‑squares subproblem in the full ambient space at each iteration, which becomes computationally prohibitive when the dimension N is large. In contrast, greedy algorithms like Orthogonal Matching Pursuit (OMP) exploit the sparsity pattern to restrict computations to a low‑dimensional support set. ASM bridges these two worlds: it uses the soft‑thresholding step of ADMM (or more generally any proximal denoiser) to generate a support set (E_k) and then solves the data‑fidelity subproblem only on the subspace spanned by the columns of (A) indexed by (E_k).
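The denoise-then-restrict idea can be illustrated with the standard soft-thresholding operator (the proximal map of the l1 norm) followed by reading off the nonzero coordinates as the support set (E_k). The threshold and input values below are illustrative, not taken from the paper:

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t*||.||_1: shrink each entry toward zero by t."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

# Illustrative intermediate iterate and threshold (not from the paper)
mu = np.array([1.5, -0.2, 0.0, -3.0, 0.4])
z_k = soft_threshold(mu, 0.5)   # denoised iterate
E_k = np.flatnonzero(z_k)       # support set: indices of surviving entries
```

Here `z_k` is `[1.0, 0.0, 0.0, -2.5, 0.0]`, so the data-fidelity step would be solved only over columns 0 and 3 of (A).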

A key technical contribution is the rigorous analysis showing that this restriction does not break the global convergence guarantees of ADMM. The authors reveal an intrinsic proximal‑gradient structure hidden in ADMM (equivalently, a Douglas‑Rachford splitting view) and prove that, provided the multiplier updates are appropriately averaged (via a parameter (d)), the proximal residual can be controlled. This “proximal residual control” ensures that the sequence of iterates remains faithful to the original ADMM trajectory while enjoying a dramatically reduced per‑iteration cost. Moreover, for the LASSO problem they establish a local geometric (linear) convergence rate, demonstrating that once the support estimate is close to the true support, ASM converges at least as fast as the full‑space method.
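For the LASSO, the proximal residual referenced above has a concrete form: it vanishes exactly at a minimizer, which is why it serves both as a convergence certificate and as the quantity being controlled. A minimal sketch, assuming a step size `v` and regularization weight `lam` (variable names are ours, not the paper's):

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def prox_residual(x, A, y, lam, v):
    """||x - prox_{v*lam*||.||_1}(x + v*A^T(y - A x))||.

    Zero if and only if x minimizes 0.5*||Ax - y||^2 + lam*||x||_1.
    """
    grad_step = x + v * A.T @ (y - A @ x)
    return np.linalg.norm(x - soft_threshold(grad_step, v * lam))
```

Monitoring this quantity along the iterates is one natural way to realize the "proximal residual control" used in the global convergence argument.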

Algorithmically, ASM proceeds as follows: (1) compute a gradient‑type intermediate variable (\mu_k = x_{k-1}^{ave} + v_k A^T(y - A x_{k-1}^{ave})); (2) apply a denoiser (D(\cdot)) (soft‑thresholding or any plug‑and‑play prior) to obtain (z_k); (3) extract the support (E_k) from (z_k); (4) restrict (\mu_k) and (z_k) to the subspace defined by (E_k); (5) solve a reduced least‑squares problem (the subspace‑restricted (\hat L_k^v) step) to obtain (\hat x_{k+1}); (6) lift (\hat x_{k+1}) back to the full space by zero‑padding; (7) update the averaged iterate (x_k^{ave}). The method retains the full‑space multiplier structure, which prevents premature freezing of coordinates that should later become active.
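The seven steps above can be sketched as a single iteration. This is a deliberately simplified reading: the paper's restricted step is a regularized least-squares subproblem with multiplier bookkeeping, whereas the sketch below uses a plain least-squares solve and an averaging weight `d`, all names being ours:

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def asm_iteration(x_ave, A, y, lam, v, d):
    """One simplified ASM sweep following steps (1)-(7) of the summary."""
    N = A.shape[1]
    # (1) gradient-type intermediate variable
    mu = x_ave + v * A.T @ (y - A @ x_ave)
    # (2) denoise (soft-thresholding here; any PnP denoiser could be used)
    z = soft_threshold(mu, v * lam)
    # (3) extract the support
    E = np.flatnonzero(z)
    if E.size == 0:
        return x_ave
    # (4)-(5) solve the reduced least-squares problem on the subspace
    A_hat = A[:, E]
    x_hat = np.linalg.lstsq(A_hat, y, rcond=None)[0]
    # (6) lift back to the full space by zero-padding
    x_full = np.zeros(N)
    x_full[E] = x_hat
    # (7) averaged update of the iterate
    return (1 - d) * x_ave + d * x_full
```

Because steps (4)-(6) touch only the |E_k| selected columns, the per-iteration cost scales with the current support size rather than the ambient dimension N.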

The paper also discusses practical enhancements. Low‑rank updates of the restricted matrix (\hat A_k) can be maintained efficiently, and the averaging parameter (d) can be tuned adaptively to balance initial rapid progress with later high‑accuracy refinement. Importantly, ASM is compatible with the Plug‑and‑Play (PnP) paradigm: the denoising operator (D) can be replaced by sophisticated priors such as Bayesian MAP estimators, Markov‑chain‑based correlation models, or learned neural denoisers (e.g., DnCNN). Experiments with these alternatives show consistent performance gains over standard ADMM.
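The Plug-and-Play compatibility amounts to treating the denoiser as an interchangeable callable: any map from the intermediate variable to a (typically sparse) estimate can slot into step (2). A minimal sketch with two hypothetical denoisers (neither is the paper's exact prior):

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def denoise_and_support(mu, denoiser, tol=1e-10):
    """Apply a plug-and-play denoiser, then read off the active set."""
    z = denoiser(mu)
    E = np.flatnonzero(np.abs(z) > tol)
    return z, E

# Default l1 prior (soft-thresholding) ...
l1_denoiser = lambda m: soft_threshold(m, 0.5)
# ... or, e.g., a hard-thresholding alternative (illustrative only)
hard_denoiser = lambda m: np.where(np.abs(m) > 0.5, m, 0.0)
```

A learned denoiser such as DnCNN would be wrapped the same way, with the caveat (noted in the PnP literature) that its output need not be exactly sparse, so the support extraction tolerance matters.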

Empirical evaluation covers three domains. In a synthetic LASSO setting (M=200, N=500, \lambda=10^{-3}) over 500 Monte‑Carlo runs, ASM reaches a KKT residual of (10^{-8}) with roughly 30% fewer iterations than ADMM and comparable speed to the semi‑smooth Newton augmented Lagrangian (SSNAL) method, which is asymptotically super‑linear. In a wireless channel‑estimation scenario, ASM outperforms Approximate Message Passing (AMP) in both convergence speed and mean‑square error. In dynamic compressed sensing, where the underlying sparse signal evolves over time, ASM maintains stable tracking while OMP‑based or full‑space ADMM methods either diverge or become too slow.

Overall, the Alternating Subspace Method offers a compelling blend of computational efficiency, strong theoretical guarantees, and flexibility to incorporate a wide range of priors. By restricting the costly data‑fidelity step to a dynamically updated low‑dimensional subspace while preserving the robust convergence properties of ADMM, ASM positions itself as a competitive alternative for large‑scale sparse recovery tasks in signal processing, communications, and beyond.

