Sparse Recovery of Positive Signals with Minimal Expansion

Notice: This research summary and analysis were automatically generated using AI technology. For absolute accuracy, please refer to the original arXiv source.

We investigate the sparse recovery problem of reconstructing a high-dimensional non-negative sparse vector from lower-dimensional linear measurements. While much work has focused on dense measurement matrices, sparse measurement schemes are crucial in applications such as DNA microarrays and sensor networks, where dense measurements are not practically feasible. One possible construction uses the adjacency matrices of expander graphs, which often leads to recovery algorithms much more efficient than $\ell_1$ minimization. However, to date, constructions based on expanders have required very high expansion coefficients, which can make such graphs difficult to construct and the recoverable sets small. In this paper, we construct sparse measurement matrices for the recovery of non-negative vectors, using perturbations of the adjacency matrix of an expander graph with a much smaller expansion coefficient. We present a necessary and sufficient condition for $\ell_1$ optimization to successfully recover the unknown vector and obtain expressions for the recovery threshold. For certain classes of measurement matrices, this necessary and sufficient condition is further equivalent to the existence of a “unique” vector in the constraint set, which opens the door to alternatives to $\ell_1$ minimization. We further show that the minimal expansion we use is necessary for any graph for which sparse recovery is possible, and that our construction is therefore tight. We then present a novel recovery algorithm that exploits expansion and is much faster than $\ell_1$ optimization. Finally, we demonstrate through theoretical bounds, as well as simulation, that our method is robust to noise and approximate sparsity.


💡 Research Summary

The paper addresses the problem of recovering a high‑dimensional non‑negative sparse vector x ∈ ℝⁿ₊ from a lower‑dimensional linear measurement y = Ax, where A is a sparse measurement matrix. While most compressed‑sensing literature relies on dense random matrices and ℓ₁ minimization, many practical settings—such as DNA microarrays, sensor networks, and hardware‑constrained imaging—require measurement operators with few non‑zero entries. The authors propose to construct A as a (perturbed) adjacency matrix of a bipartite expander graph with left‑regular degree d.
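A left-regular bipartite graph of this kind can be sketched in a few lines. The helper below is hypothetical (it builds only the plain 0/1 adjacency matrix; the paper's construction additionally perturbs it, which is not reproduced here): each of the n left (variable) nodes is connected to exactly d of the m right (measurement) nodes, chosen at random.

```python
import numpy as np

def left_regular_measurement_matrix(n, m, d, seed=None):
    """Return the m x n 0/1 adjacency matrix of a random bipartite graph
    in which every left (variable) node has exactly d right neighbors.
    Illustrative sketch only, not the paper's perturbed construction."""
    rng = np.random.default_rng(seed)
    A = np.zeros((m, n))
    for j in range(n):
        rows = rng.choice(m, size=d, replace=False)  # d distinct measurement nodes
        A[rows, j] = 1.0
    return A

A = left_regular_measurement_matrix(n=12, m=6, d=3, seed=0)
print(A.sum(axis=0))  # every column sums to d = 3
```

Note that no expansion property is checked here; a random left-regular graph is merely expanding with high probability for suitable parameters.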

The central theoretical contribution is the identification of a “minimal expansion” condition that is both necessary and sufficient for exact recovery of any k‑sparse non‑negative vector. Formally, for every subset S ⊆ L of size at most k, the neighbor set satisfies |N(S)| ≥ (1 − ε) d |S|. Unlike earlier works that demand ε to be extremely small (e.g., ε < 1/6), the authors show that much larger ε (up to about 0.3) still guarantees recovery. Under this condition, the ℓ₁ program

 min ‖z‖₁ subject to Az = y, z ≥ 0

has a unique feasible point, and that point is exactly the original x. The uniqueness condition is proved to be equivalent to a “unique neighbor property” for certain regular bipartite expanders, thereby providing a combinatorial analogue of the Restricted Isometry Property.
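Because z ≥ 0, the objective ‖z‖₁ equals 1ᵀz, so the program above is an ordinary linear program. A minimal toy illustration (using a generic Gaussian A rather than the paper's sparse matrices, and sizes m, n, k chosen arbitrarily) can be solved with `scipy.optimize.linprog`:

```python
import numpy as np
from scipy.optimize import linprog

# Toy instance: recover a k-sparse non-negative x from y = Ax by solving
#   min 1'z   subject to   Az = y, z >= 0,
# which equals the l1 program above since 1'z = ||z||_1 for z >= 0.
rng = np.random.default_rng(0)
m, n, k = 10, 20, 2
A = rng.standard_normal((m, n))          # generic dense A, for illustration only
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = 1.0 + rng.random(k)         # positive values on a random support
y = A @ x

res = linprog(c=np.ones(n), A_eq=A, b_eq=y, bounds=(0, None), method="highs")
print(res.success, np.max(np.abs(res.x - x)))
```

For such mild sparsity levels the LP typically returns x exactly (up to solver tolerance), consistent with the uniqueness claim.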

Beyond ℓ₁, the paper leverages the same combinatorial structure to design a fast “Expander‑Peeling” algorithm. The algorithm repeatedly identifies measurement nodes that are connected to exactly one unrecovered variable, computes that variable’s value directly, and removes it from the graph. The minimal expansion guarantees that this peeling process succeeds in at most k iterations, yielding a runtime proportional to the number of non‑zero entries of A (O(nnz(A)))—orders of magnitude faster than generic linear‑programming solvers.
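The peeling idea can be sketched as follows for a 0/1 measurement matrix. This is a hypothetical implementation, not the paper's exact algorithm: it applies two rules until nothing changes, (i) a zero residual measurement forces all of its unresolved neighbors to zero (this is where non-negativity is essential), and (ii) a measurement touching exactly one unresolved variable reveals that variable's value directly.

```python
import numpy as np

def peel(A, y):
    """Sketch of a peeling decoder for non-negative x with 0/1 matrix A
    (rows = measurements, columns = variables). Illustrative only."""
    m, n = A.shape
    x = np.zeros(n)
    resolved = np.zeros(n, dtype=bool)
    r = y.astype(float).copy()                 # residual measurement values
    changed = True
    while changed and not resolved.all():
        changed = False
        for i in range(m):
            cols = np.flatnonzero(A[i])
            unresolved = cols[~resolved[cols]]
            if unresolved.size == 0:
                continue
            if np.isclose(r[i], 0.0):
                # non-negativity: zero residual => all unresolved neighbors are 0
                resolved[unresolved] = True
                changed = True
            elif unresolved.size == 1:
                # "unique neighbor" measurement: read the value off directly
                j = unresolved[0]
                x[j] = r[i] / A[i, j]
                r = r - x[j] * A[:, j]         # subtract j's contribution
                resolved[j] = True
                changed = True
    return x, resolved

# Tiny hand-built example: x = [2, 0, 0, 3, 0, 0]
A = np.array([[1, 1, 0, 0, 0, 0],
              [0, 1, 1, 0, 0, 0],
              [0, 0, 0, 1, 1, 0],
              [0, 0, 0, 0, 1, 1]], dtype=float)
x_true = np.array([2.0, 0, 0, 3.0, 0, 0])
x_hat, done = peel(A, A @ x_true)
```

Each inner pass touches every edge at most once, which matches the O(nnz(A))-flavored runtime described above, though this naive loop does not use the priority queue bookkeeping a tuned implementation would.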

A tightness result is also established: any graph that permits sparse non‑negative recovery must satisfy the minimal expansion bound, so the proposed construction is optimal in the sense of required expansion.

The authors complement the theory with extensive simulations. Synthetic experiments confirm that the recovery threshold matches the derived bound and that the peeling algorithm attains a 20‑30× speedup over CVX‑based ℓ₁ solvers while maintaining comparable accuracy. Experiments on realistic DNA‑microarray measurement models demonstrate robustness to additive Gaussian noise; the reconstruction error scales linearly with the noise level. Moreover, the method tolerates approximate sparsity, delivering bounded error when the signal contains many small but non‑zero entries.

In summary, the paper makes five key contributions: (1) a necessary‑and‑sufficient minimal‑expansion condition for exact recovery of non‑negative sparse vectors; (2) a proof that this condition dramatically relaxes the expansion requirements of prior expander‑based schemes; (3) an equivalence between the ℓ₁ uniqueness condition and a combinatorial neighbor property; (4) a novel, provably fast peeling recovery algorithm; and (5) a demonstration of optimality, noise robustness, and applicability to real‑world sparse measurement scenarios. These results open the door to practical, low‑complexity compressed‑sensing systems in domains where dense measurements are infeasible.

