Fast Sparse Decomposition by Iterative Detection-Estimation
Finding sparse solutions of underdetermined systems of linear equations is a fundamental problem in signal processing and statistics which has become a subject of interest in recent years. In general, these systems have infinitely many solutions. However, it may be shown that sufficiently sparse solutions may be identified uniquely. In other words, the corresponding linear transformation will be invertible if we restrict its domain to sufficiently sparse vectors. This property may be used, for example, to solve the underdetermined Blind Source Separation (BSS) problem, or to find a sparse representation of a signal in an 'overcomplete' dictionary of primitive elements (i.e., the so-called atomic decomposition). The main drawback of current methods of finding sparse solutions is their computational complexity. In this paper, we will show that by detecting 'active' components of the (potential) solution, i.e., those components having a considerable value, a framework for fast solution of the problem may be devised. The idea leads to a family of algorithms, called 'Iterative Detection-Estimation (IDE)', which converge to the solution by successive detection and estimation of its active part. When the performance of IDE(s) is compared with one of the most successful methods to date, which is based on Linear Programming (LP), an improvement in speed of about two to three orders of magnitude is observed.
💡 Research Summary
The paper addresses the classic problem of finding sparse solutions to underdetermined linear systems Ax = b, a task that underlies many modern signal‑processing and statistical applications such as blind source separation, compressed sensing, and atomic decomposition in overcomplete dictionaries. While it is well known that sufficiently sparse vectors are uniquely identifiable, existing algorithms that exploit this fact—most notably ℓ₁‑minimization approaches like Basis Pursuit and their linear‑programming (LP) formulations—suffer from prohibitive computational cost when the dimensions become large.
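As a concrete illustration of the LP baseline mentioned above, Basis Pursuit (min ‖x‖₁ subject to Ax = b) can be cast as a linear program via the standard split x = u − v with u, v ≥ 0. The sketch below is a toy instance only; the dimensions, random seed, and use of SciPy's `linprog` as the solver are illustrative choices, not details from the paper:

```python
import numpy as np
from scipy.optimize import linprog

# Toy problem: recover a 3-sparse x_true from 25 random linear measurements.
rng = np.random.default_rng(0)
n, m, k = 25, 50, 3
A = rng.standard_normal((n, m))
x_true = np.zeros(m)
x_true[rng.choice(m, size=k, replace=False)] = np.array([1.0, -1.5, 2.0])
b = A @ x_true

# Basis Pursuit as an LP: minimize sum(u + v) s.t. A(u - v) = b, u, v >= 0.
c = np.ones(2 * m)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None))
x_hat = res.x[:m] - res.x[m:]
print("max recovery error:", np.max(np.abs(x_hat - x_true)))
```

For sufficiently sparse x_true, the ℓ₁ minimizer coincides with the sparsest solution, so the LP recovers x_true exactly up to solver precision; the cost of solving such LPs at large m is precisely the bottleneck that motivates IDE.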
To overcome this bottleneck the authors propose a new framework called Iterative Detection‑Estimation (IDE). The central idea is to split the recovery process into two alternating stages. In the Detection stage the algorithm evaluates the correlation between the current residual r = b − A x̂ and each column of A. Large inner products indicate that the corresponding coefficient is likely to be “active” (i.e., non‑zero). By applying a threshold or selecting the top‑k correlations, a candidate support set S is built. In the Estimation stage the problem is reduced to a small sub‑system that involves only the columns indexed by S. This sub‑problem, now restricted to |S| unknowns, is typically no longer underdetermined and can be solved cheaply, e.g., in the least‑squares sense, while the coefficients outside S are set to zero. The two stages then alternate until the active set stabilizes.
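The alternating detection/estimation loop described above can be sketched as follows. This is a simplified, generic sketch of the detect-then-estimate idea (closer in flavor to a hard-thresholding pursuit than to the paper's exact IDE variants); the function name `ide_sketch`, the assumption of unit-norm columns, and a known sparsity level k are all illustrative choices:

```python
import numpy as np

def ide_sketch(A, b, k, n_iter=50):
    """Simplified detection-estimation loop (illustrative, not the paper's
    exact IDE algorithms). Assumes A has unit-norm columns and the sparsity
    level k is known."""
    m = A.shape[1]
    x = np.zeros(m)
    for _ in range(n_iter):
        r = b - A @ x                           # residual of current estimate
        score = np.abs(x + A.T @ r)             # detection: correlation-based score
        S = np.sort(np.argsort(score)[-k:])     # candidate active set (top-k)
        x_new = np.zeros(m)
        # Estimation: least squares on the small sub-system A[:, S] x_S = b.
        x_new[S] = np.linalg.lstsq(A[:, S], b, rcond=None)[0]
        if np.allclose(x_new, x, atol=1e-12):   # active set has stabilized
            return x_new
        x = x_new
    return x

# Demo: recover a 3-sparse vector from 25 random projections.
rng = np.random.default_rng(0)
n, m, k = 25, 50, 3
A = rng.standard_normal((n, m))
A /= np.linalg.norm(A, axis=0)                  # unit-norm columns
x_true = np.zeros(m)
x_true[rng.choice(m, size=k, replace=False)] = np.array([1.0, -1.5, 2.0])
b = A @ x_true
x_hat = ide_sketch(A, b, k)
print("max recovery error:", np.max(np.abs(x_hat - x_true)))
```

The per-iteration cost is dominated by one matrix-vector product and one small least-squares solve on |S| columns, which is the source of the speed advantage over solving a full m-dimensional LP.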