PARNES: A rapidly convergent algorithm for accurate recovery of sparse and approximately sparse signals
In this article, we propose an algorithm, NESTA-LASSO, for the LASSO problem, i.e., an underdetermined linear least-squares problem with a 1-norm constraint on the solution. We prove, under the assumption of the restricted isometry property (RIP) and a sparsity condition on the solution, that NESTA-LASSO is guaranteed to be almost always locally linearly convergent. As in the case of the algorithm NESTA proposed by Becker, Bobin, and Candès, we rely on Nesterov's accelerated proximal gradient method, which takes O(ε^{-1/2}) iterations to come within ε > 0 of the optimal value. We introduce a modification to Nesterov's method that regularly updates the prox-center in a provably optimal manner; the aforementioned linear convergence is due in part to this modification. In the second part of this article, we attempt to solve the basis pursuit denoising (BPDN) problem (i.e., approximating the minimum 1-norm solution to an underdetermined least-squares problem) by using NESTA-LASSO in conjunction with the Pareto root-finding method employed by van den Berg and Friedlander in their SPGL1 solver. The resulting algorithm is called PARNES. We provide numerical evidence showing that it is comparable to currently available solvers.
💡 Research Summary
The paper introduces a two‑stage algorithmic framework for recovering sparse and approximately sparse signals, named PARNES. The first stage, NESTA‑LASSO, is a modified version of the NESTA algorithm that solves the LASSO problem (minimize ‖Ax−b‖₂² subject to an ℓ₁‑norm constraint). NESTA‑LASSO retains Nesterov's accelerated proximal‑gradient scheme, which guarantees an O(ε^{-1/2}) iteration bound for reaching an ε‑optimal objective value, but it augments the method with a systematic update of the proximal center. The authors prove that, under the restricted isometry property (RIP) and a sparsity assumption on the true solution, this update is provably optimal and yields almost‑sure local linear convergence: the error contracts geometrically after a finite number of iterations.
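To make the inner stage concrete, the sketch below shows accelerated projected gradient on the ℓ₁ ball. This is not the paper's NESTA-LASSO (whose smoothing and prox-center update rule are more refined); it is a minimal stand-in under stated assumptions, in which a periodic restart plays the role of the prox-center update. The names `project_l1_ball` and `nesterov_lasso` and all parameter choices are illustrative, not from the paper.

```python
import numpy as np

def project_l1_ball(v, tau):
    """Euclidean projection of v onto {x : ||x||_1 <= tau} (Duchi et al., 2008)."""
    if np.abs(v).sum() <= tau:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]                 # sorted magnitudes, descending
    cssv = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > cssv - tau)[0][-1]
    theta = (cssv[rho] - tau) / (rho + 1.0)      # soft-threshold level
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def nesterov_lasso(A, b, tau, n_iters=500, restart_every=50):
    """Accelerated projected gradient for min ||Ax - b||_2^2 s.t. ||x||_1 <= tau.
    The periodic restart below stands in for the prox-center update; the paper's
    rule is more sophisticated (and provably optimal)."""
    L = np.linalg.norm(A, 2) ** 2                # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for k in range(n_iters):
        grad = A.T @ (A @ y - b)
        x_new = project_l1_ball(y - grad / L, tau)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
        if (k + 1) % restart_every == 0:         # re-center: restart momentum at x
            y, t = x.copy(), 1.0
    return x
```

On well-conditioned sparse problems, restarted acceleration of this kind typically exhibits the local linear convergence the paper proves for its refined update.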
The second stage, PARNES, embeds NESTA‑LASSO within a Pareto‑curve root‑finding routine to solve the basis pursuit denoising (BPDN) problem, which seeks the minimum‑ℓ₁‑norm solution that fits the data within a prescribed noise level. This approach follows the paradigm of SPGL1, in which a scalar parameter τ, the radius of the ℓ₁‑norm constraint, controls the trade‑off between data fidelity (‖Ax−b‖₂) and sparsity (‖x‖₁). In PARNES, each outer iteration updates τ via a Newton step on the Pareto curve and solves the resulting LASSO sub‑problem with NESTA‑LASSO, whose prox‑center updates accelerate each sub‑solve. Consequently, the total work across the Pareto iterations is substantially reduced, leading to faster convergence to the BPDN solution.
Theoretical contributions include: (1) a rigorous proof that the proximal‑center update minimizes the worst‑case contraction factor; (2) a demonstration that, when A satisfies the RIP with constant δ₂ₛ < √2 − 1 and the true signal is s‑sparse, NESTA‑LASSO converges linearly for almost all initial points; and (3) an error bound for approximately sparse signals, showing that the reconstruction error scales with the noise level σ and the ℓ₁‑norm of the tail of the signal.
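The bound in (3) is stated here without explicit constants; a representative form of this kind of result, following the standard Candès‑type RIP analysis (where x_s denotes the best s‑term approximation of x, and C₀, C₁ depend only on the RIP constant), reads:

```latex
\|\hat{x} - x\|_2 \;\le\; C_0\,\frac{\|x - x_s\|_1}{\sqrt{s}} \;+\; C_1\,\sigma
```

The first term vanishes for exactly s‑sparse signals, so the reconstruction error is then driven by the noise level alone.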
Experimental validation is performed on synthetic Gaussian sensing matrices and on real‑world image reconstruction tasks (e.g., Lena, Barbara). The authors compare PARNES against state‑of‑the‑art solvers such as SPGL1, L1‑MAGIC, GPSR, and FISTA. Results indicate that, for the same reconstruction accuracy (measured by PSNR or relative ℓ₂ error), PARNES achieves a 1.5–2× reduction in CPU time and a 30–40% decrease in iteration count. The advantage is most pronounced at high signal‑to‑noise ratios, where the Pareto root‑finding requires only a single parameter update. Moreover, NESTA‑LASSO alone outperforms the original NESTA, requiring 25–35% fewer iterations when the proximal center is updated.
The paper’s significance lies in showing that dynamic proximal‑center updates can be integrated seamlessly into accelerated proximal methods, providing both theoretical linear convergence guarantees and practical speedups. While the analysis relies on the RIP, which may not hold for all sensing matrices, the empirical results suggest robustness beyond the strict theoretical regime. Limitations include sensitivity to the initial choice of the Pareto root‑finding parameter and the lack of a comprehensive study on non‑RIP or highly coherent measurement operators.
In conclusion, PARNES delivers a fast, accurate, and theoretically grounded solution for sparse signal recovery. It bridges the gap between accelerated first‑order methods and root‑finding strategies for BPDN, offering a compelling alternative to existing solvers and opening avenues for future work on distributed implementations, adaptive RIP‑free analysis, and hybrid schemes that combine learned priors with the proposed optimization framework.