An Improved Lower Bound of The Spark With Application
The spark plays a central role in studying the uniqueness of sparse solutions of underdetermined linear equations. In this article, we derive a new lower bound on the spark. As an application, we obtain a new criterion for the uniqueness of sparse solutions of linear equations.
💡 Research Summary
The paper addresses a fundamental problem in sparse signal recovery: determining when a solution to an underdetermined linear system Ax = b is uniquely sparse. The key metric for this purpose is the spark of a matrix A, defined as the smallest number of columns that are linearly dependent. Classical results state that if a vector x satisfies ‖x‖₀ < spark(A)/2, then x is the unique sparsest solution of Ax = b. However, computing spark exactly is NP‑hard, and existing analytical lower bounds rely primarily on the mutual coherence μ(A). The standard bound spark(A) ≥ 1 + 1/μ(A) is often very loose, especially for matrices with high coherence, limiting its usefulness in practice.
The authors propose a novel lower bound that replaces mutual coherence with a quantity they call the “squared coherence” ν(A). ν(A) is defined via the minimum singular values of all 2‑column submatrices of A:
ν(A) := max_{|S|=2} (1 − σ_min²(A_S)).
Since 0 ≤ σ_min(A_S) ≤ 1 for normalized columns, ν(A) lies in [0, 1]. Crucially, ν(A) can be smaller than μ(A), and substantially so when the columns are highly coherent, which is exactly the regime where the coherence-based bound degrades. The main theoretical contribution (Theorem 1) shows that
spark(A) ≥ 1 + 1/ν(A).
The proof combines Gershgorin’s disc theorem with spectral norm inequalities to relate the smallest singular value of any 2‑column submatrix to the linear independence of larger column sets. This approach yields a bound that is provably tighter than the coherence‑based bound for any matrix where ν(A) < μ(A).
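The quantity ν(A) and the resulting spark bound can be computed directly from the definition by scanning all 2‑column submatrices. A minimal sketch, assuming the columns of A are used as given (the paper presumably works with ℓ₂‑normalized columns); the small example matrix is illustrative, not taken from the paper:

```python
import itertools
import numpy as np

def squared_coherence(A: np.ndarray) -> float:
    """nu(A): worst-case 1 - sigma_min^2 over all 2-column submatrices of A."""
    n = A.shape[1]
    nu = 0.0
    for i, j in itertools.combinations(range(n), 2):
        # smallest singular value of the 2-column submatrix A_S, S = {i, j}
        s_min = np.linalg.svd(A[:, [i, j]], compute_uv=False)[-1]
        nu = max(nu, 1.0 - s_min**2)
    return nu

# Toy 2x3 system: any two columns are independent, all three are dependent.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
nu = squared_coherence(A)
bound = 1 + 1 / nu   # Theorem 1: spark(A) >= 1 + 1/nu(A)
print(nu, bound)
```

For this toy matrix ν(A) ≈ 0.618, giving spark(A) ≥ 2.618, i.e. spark(A) ≥ 3, which is exact here since the three columns satisfy a₁ + a₂ − a₃ = 0.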
From this improved spark estimate, the authors derive a new uniqueness criterion (Corollary 1): if
‖x‖₀ < (1 + 1/ν(A))/2,
then x is the unique sparsest solution of Ax = b. This condition relaxes the classical coherence‑based condition whenever ν(A) < μ(A), allowing a larger sparsity level while still guaranteeing uniqueness.
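Checking Corollary 1 for a candidate solution is a one-line test once ν(A) is known. A hedged sketch; `unique_by_nu` is a hypothetical helper name, and ν is assumed to be precomputed from the definition above:

```python
import numpy as np

def unique_by_nu(x: np.ndarray, nu: float) -> bool:
    """Corollary 1: x is guaranteed unique sparsest if ||x||_0 < (1 + 1/nu)/2."""
    sparsity = int(np.count_nonzero(x))   # ||x||_0
    return sparsity < (1 + 1 / nu) / 2

# With nu = 0.618 the threshold is (1 + 1/0.618)/2 ~ 1.31,
# so any 1-sparse vector passes the test.
x = np.array([0.0, 3.0, 0.0, 0.0])
print(unique_by_nu(x, nu=0.618))   # -> True
```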
The paper validates the theoretical results with extensive numerical experiments. Three families of matrices are examined: (i) random Gaussian matrices, (ii) partial Fourier matrices, and (iii) deliberately constructed high‑coherence matrices. For each case, the authors compute μ(A), ν(A), the exact spark (via exhaustive search for small dimensions), and the proposed lower bounds. The ν‑based bound consistently lies closer to the true spark than the μ‑based bound, sometimes matching it exactly. In the high‑coherence scenario, μ(A) is close to 1, so the classical bound degenerates to the vacuous spark(A) ≥ 2, whereas ν(A) drops to values around 0.3, producing a meaningful spark estimate.
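The exhaustive ground-truth spark used in these small-dimension experiments can be sketched as a brute-force search over column subsets; this is exponential in the number of columns, which is why it is feasible only for small matrices:

```python
import itertools
import numpy as np

def spark(A: np.ndarray) -> int:
    """Smallest number of linearly dependent columns of A (exhaustive search)."""
    m, n = A.shape
    for k in range(1, n + 1):
        for cols in itertools.combinations(range(n), k):
            # a k-column submatrix with rank < k has a linear dependency
            if np.linalg.matrix_rank(A[:, list(cols)]) < k:
                return k
    return n + 1   # no dependent subset: spark is conventionally infinite

A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
print(spark(A))   # -> 3, since a1 + a2 - a3 = 0 but any two columns are independent
```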
The practical impact is demonstrated on two applications. First, in compressed sensing reconstruction, the authors show that ℓ₁‑minimization succeeds in recovering signals that violate the μ‑based uniqueness condition but satisfy the ν‑based condition. Second, in neural network pruning, they employ the ν‑based criterion to decide how aggressively to zero out weights. Experiments reveal that higher pruning ratios can be achieved without loss of accuracy when the ν‑based bound is used, confirming its relevance for modern high‑dimensional models.
In conclusion, the paper delivers a mathematically rigorous and computationally tractable improvement over existing spark lower bounds. By leveraging the minimum singular values of 2‑column submatrices, the authors obtain a bound that is both tighter and more informative for a wide range of matrices. The derived uniqueness condition expands the set of problems where exact sparse recovery can be guaranteed, with immediate implications for compressed sensing, signal processing, and sparse machine‑learning models. Future work suggested includes developing fast approximation algorithms for ν(A) in large‑scale settings and extending the analysis to structured non‑linear measurement models.