CoSaMP: Iterative signal recovery from incomplete and inaccurate samples
Compressive sampling offers a new paradigm for acquiring signals that are compressible with respect to an orthonormal basis. The major algorithmic challenge in compressive sampling is to approximate a compressible signal from noisy samples. This paper describes a new iterative recovery algorithm called CoSaMP that delivers the same guarantees as the best optimization-based approaches. Moreover, this algorithm offers rigorous bounds on computational cost and storage. It is likely to be extremely efficient for practical problems because it requires only matrix-vector multiplies with the sampling matrix. For many cases of interest, the running time is just O(N log² N), where N is the length of the signal.
💡 Research Summary
The paper introduces CoSaMP (Compressive Sampling Matching Pursuit), a new iterative algorithm for recovering sparse or compressible signals from a limited number of possibly noisy linear measurements. The authors place the algorithm within the framework of compressed sensing, where the central theoretical tool is the Restricted Isometry Property (RIP). A measurement matrix Φ satisfies the RIP of order r with constant δ_r if (1‑δ_r)‖x‖₂² ≤ ‖Φx‖₂² ≤ (1+δ_r)‖x‖₂² for every vector x having at most r non‑zero entries. When δ_{2s} is sufficiently small (a universal constant), Φ approximately preserves the geometry of all s‑sparse signals, which makes stable inversion possible.
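The RIP inequality can be illustrated numerically: an m×N Gaussian matrix scaled by 1/√m is a standard example of a matrix that satisfies the RIP with high probability when m is large relative to s·log N. A minimal sketch (the dimensions and seed below are illustrative choices, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
m, N, s = 200, 1000, 10

# Gaussian measurement matrix, scaled so that E[||Phi @ x||^2] = ||x||^2.
Phi = rng.standard_normal((m, N)) / np.sqrt(m)

# Random s-sparse test vector.
x = np.zeros(N)
support = rng.choice(N, size=s, replace=False)
x[support] = rng.standard_normal(s)

# The RIP says this ratio lies in [1 - delta, 1 + delta] for every
# s-sparse x; here we check it concentrates near 1 for one sample.
ratio = np.linalg.norm(Phi @ x) ** 2 / np.linalg.norm(x) ** 2
print(ratio)
```

Checking the ratio for a single random vector does not certify the RIP (which quantifies over all sparse vectors, and is NP-hard to verify exactly), but it shows the concentration phenomenon that the probabilistic RIP proofs formalize.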
CoSaMP operates in five steps per iteration: (1) Identification – compute a signal proxy y = Φᵗr from the current residual r = u − Φa and select the 2s largest‑magnitude entries of y; (2) Support Merger – unite these indices with the support of the current estimate a, forming a candidate set Ω of size at most 3s; (3) Estimation – solve a least‑squares problem restricted to Ω, i.e., find the vector b supported on Ω that minimizes ‖Φ_Ω b_Ω − u‖₂; (4) Pruning – keep only the s largest‑magnitude entries of b and zero out the rest, producing the next approximation a; (5) Sample Update – recompute the residual r = u − Φa. The process repeats until a stopping criterion is met (either a fixed number of iterations or a residual norm below a prescribed tolerance η).
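The five steps above can be sketched directly in NumPy. This is a minimal illustration under assumed conventions (dense Φ, `lstsq` for the restricted least squares), not the authors' reference implementation; all names are chosen here:

```python
import numpy as np

def cosamp(Phi, u, s, max_iter=50, tol=1e-10):
    """Minimal CoSaMP sketch: recover an s-sparse approximation
    from samples u = Phi @ x (+ noise). Names are illustrative."""
    N = Phi.shape[1]
    a = np.zeros(N)
    r = u.copy()                                  # initial residual
    for _ in range(max_iter):
        y = Phi.T @ r                             # (1) proxy of residual
        omega = np.argsort(np.abs(y))[-2 * s:]    #     2s largest entries
        T = np.union1d(omega, np.nonzero(a)[0])   # (2) support merger
        b = np.zeros(N)                           # (3) least squares on T
        b[T] = np.linalg.lstsq(Phi[:, T], u, rcond=None)[0]
        keep = np.argsort(np.abs(b))[-s:]         # (4) prune to s largest
        a = np.zeros(N)
        a[keep] = b[keep]
        r = u - Phi @ a                           # (5) sample update
        if np.linalg.norm(r) <= tol:
            break
    return a
```

A production version would replace the full `argsort` with a partial selection and solve the restricted least-squares problem iteratively, but the control flow matches the five steps above.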
The main theoretical result (Theorem A) states that, under the RIP condition δ_{2s} ≤ c (c a small constant), CoSaMP returns a 2s‑sparse vector a satisfying
‖x – a‖₂ ≤ C·max{ η, (1/√s)‖x – x_s‖₁ + ‖e‖₂ },
where x is the true signal, x_s its best s‑sparse approximation, e the measurement noise, and C a universal constant. In the noiseless case the error bound reduces to a term proportional to (1/√s)‖x – x_s‖₁, which is known to be optimal for any algorithm that uses only linear measurements. Moreover, each iteration reduces the residual norm by a constant factor (e.g., halves it), yielding a geometric convergence rate. Consequently, the total running time is O(L·log(‖x‖₂/η)), where L denotes the cost of a single matrix‑vector multiplication with Φ or its adjoint. For matrices that admit fast multiplication (e.g., partial Fourier matrices, where L = O(N log N)), the overall complexity becomes O(N log N·log(‖x‖₂/η)), which is essentially linear up to polylogarithmic factors. Memory usage is O(N), as the algorithm only stores the current estimate, the residual, and a few auxiliary vectors.
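The iteration count implied by geometric convergence can be worked out directly: if each iteration at least halves the error, then reaching precision η from an initial error of ‖x‖₂ takes about log₂(‖x‖₂/η) iterations. A toy calculation (the numbers are illustrative, not from the paper):

```python
import math

norm_x = 100.0   # ||x||_2 of the unknown signal, illustrative
eta = 1e-4       # target precision, illustrative

# Error after k iterations is at most (1/2)^k * ||x||_2, so we need
# (1/2)^k * norm_x <= eta, i.e. k >= log2(norm_x / eta).
k = math.ceil(math.log2(norm_x / eta))
print(k)  # → 20
```

Multiplying this iteration count by the per-iteration cost L (dominated by the multiplies with Φ and Φᵗ) gives the O(L·log(‖x‖₂/η)) total running time quoted above.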
The paper also discusses practical implementation issues. Identification can be performed efficiently using a heap or partial selection algorithm to find the top‑2s entries of the proxy. The least‑squares subproblem on Ω can be solved via QR decomposition, conjugate‑gradient, or by exploiting the structure of Φ (e.g., using FFT for partial Fourier). The authors provide bounds on the number of iterations required when exact arithmetic is assumed, and they outline several stopping rules that are robust in finite‑precision environments.
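For the identification step, NumPy's `argpartition` performs exactly this partial selection in linear time, avoiding a full sort of the proxy. A small sketch (the sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.standard_normal(100_000)   # proxy vector, illustrative size
k = 32                             # 2s, illustrative

# O(N) partial selection of the k largest-magnitude entries,
# versus O(N log N) for a full argsort.
omega = np.argpartition(np.abs(y), -k)[-k:]

# Same index set as the full sort (order within the set may differ).
assert set(omega) == set(np.argsort(np.abs(y))[-k:])
```

Since identification runs once per iteration over a length-N proxy, replacing the sort with partial selection matters at the signal lengths where CoSaMP is attractive.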
Compared with existing approaches, CoSaMP combines the best of both worlds: it achieves the same error guarantees as convex‑optimization methods (Basis Pursuit, L₁ minimization) while retaining the low per‑iteration cost of greedy algorithms such as Orthogonal Matching Pursuit (OMP). Unlike OMP, which adds only one atom per iteration, CoSaMP adds a batch of 2s atoms, thereby accelerating convergence. Unlike pure convex methods, it does not require solving a large‑scale linear program or performing many inner iterations, making it far more scalable for high‑dimensional problems.
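The contrast with OMP is visible in code: OMP grows the support by a single atom per iteration, so it needs at least s iterations, while CoSaMP's batch of 2s candidates lets the iteration count depend on the target precision rather than on s. A minimal OMP sketch for comparison (names are chosen here, not from the paper):

```python
import numpy as np

def omp(Phi, u, s):
    """Minimal OMP sketch: one atom joins the support per iteration
    (CoSaMP instead examines a batch of 2s candidates at once)."""
    support = []
    r = u.copy()
    for _ in range(s):
        j = int(np.argmax(np.abs(Phi.T @ r)))  # single best-correlated atom
        if j not in support:
            support.append(j)
        sol = np.linalg.lstsq(Phi[:, support], u, rcond=None)[0]
        r = u - Phi[:, support] @ sol          # orthogonalized residual
    a = np.zeros(Phi.shape[1])
    a[support] = sol
    return a
```

Note also that OMP never revisits a chosen atom, whereas CoSaMP's pruning step can discard indices selected earlier, which is one ingredient in its stronger guarantees.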
Experimental results (summarized in the paper’s appendix) confirm the theoretical predictions: for Gaussian and partial Fourier measurement matrices, CoSaMP recovers s‑sparse signals with high probability using m = O(s log N) measurements, and it does so in significantly fewer iterations and less CPU time than OMP or Basis Pursuit solvers. The algorithm also exhibits graceful degradation in the presence of measurement noise, with the reconstruction error scaling linearly with the noise level.
In conclusion, CoSaMP provides a practical, provably optimal algorithm for compressed sensing reconstruction. Its reliance only on matrix‑vector multiplies makes it suitable for large‑scale applications such as MRI, radar, and sensor networks where fast transforms are available. The paper suggests future directions including extensions to structured sparsity models, adaptive measurement schemes, and hardware‑accelerated implementations (GPU/FPGA).