Distributed Basis Pursuit
We propose a distributed algorithm for solving the optimization problem Basis Pursuit (BP). BP finds the least L1-norm solution of the underdetermined linear system Ax = b and is used, for example, in compressed sensing for reconstruction. Our algorithm solves BP on a distributed platform such as a sensor network, and is designed to minimize the communication between nodes. The algorithm only requires the network to be connected, has no notion of a central processing node, and no node has access to the entire matrix A at any time. We consider two scenarios in which either the columns or the rows of A are distributed among the compute nodes. Our algorithm, named D-ADMM, is a decentralized implementation of the alternating direction method of multipliers. We show through numerical simulation that our algorithm requires considerably fewer communications between the nodes than state-of-the-art algorithms.
💡 Research Summary
The paper addresses the problem of solving Basis Pursuit (BP), the L1‑norm minimization formulation for underdetermined linear systems Ax = b, in a truly distributed computing environment. BP is a cornerstone of compressed sensing, where one seeks the sparsest solution consistent with measurements. Traditional solvers assume a central processor that has full access to the measurement matrix A and the observation vector b. In many modern applications—large‑scale sensor networks, Internet‑of‑Things deployments, or edge‑centric cloud architectures—this assumption is unrealistic because data are naturally partitioned across nodes, communication bandwidth is limited, and a single point of failure is undesirable.
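To make the problem concrete: BP minimizes ||x||₁ subject to Ax = b, which can be rewritten as a linear program by splitting x = u − v with u, v ≥ 0. The sketch below (not from the paper; the sizes, seed, and sparsity pattern are illustrative assumptions) solves a small centralized BP instance with SciPy's LP solver — the baseline that D-ADMM distributes.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n = 12, 30                       # underdetermined: fewer equations than unknowns
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[[2, 11, 25]] = [1.5, -2.0, 0.7]   # sparse ground truth (illustrative)
b = A @ x_true

# BP as an LP: write x = u - v with u, v >= 0, so ||x||_1 = sum(u) + sum(v)
# and the constraint Ax = b becomes A(u - v) = b.
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None))
x_hat = res.x[:n] - res.x[n:]

# The minimizer's L1 norm can never exceed that of the feasible x_true.
print(f"||x_hat||_1 = {np.abs(x_hat).sum():.4f}, "
      f"||x_true||_1 = {np.abs(x_true).sum():.4f}")
```

With enough Gaussian measurements relative to the sparsity level, the BP minimizer typically coincides with the sparse ground truth; here we only rely on the guaranteed fact that the optimum's L1 norm is no larger than that of any feasible point.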
To overcome these limitations, the authors propose D‑ADMM, a decentralized implementation of the Alternating Direction Method of Multipliers (ADMM). The algorithm works under the minimal requirement that the communication graph is connected; no node plays a special “master” role, and no node ever stores the entire matrix A. Two distribution scenarios are considered. In the “column‑distributed” case, each node holds a subset of columns of A and the corresponding components of the unknown vector x. In the “row‑distributed” case, each node holds a subset of rows of A and the associated measurements. In both cases, each node maintains a local copy (replica) of the global variable x and a local dual variable (the Lagrange multiplier).
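The two partitioning schemes can be illustrated in a few lines. In the column-distributed case the product Ax decomposes as a sum of per-node contributions A_p x_p; in the row-distributed case each node holds whole equations (rows of A plus the matching entries of b). The sketch below uses hypothetical sizes (P = 3 nodes) purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, P = 6, 12, 3                 # P compute nodes (hypothetical sizes)
A = rng.standard_normal((m, n))

# Column-distributed: node p holds a block of columns A_p and the
# matching components x_p of the unknown, so Ax = sum_p A_p x_p.
col_blocks = np.array_split(A, P, axis=1)

# Row-distributed: node p holds a block of rows of A and the
# corresponding slice of the measurement vector b.
row_blocks = np.array_split(A, P, axis=0)

x = rng.standard_normal(n)
x_blocks = np.array_split(x, P)
partial = sum(Ap @ xp for Ap, xp in zip(col_blocks, x_blocks))
print(np.allclose(partial, A @ x))   # column blocks reconstruct Ax
```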
The algorithm proceeds in synchronous rounds consisting of three steps. First, each node solves a small local quadratic subproblem to update its replica x_i; because the subproblem has a closed‑form solution, the computational burden is negligible. Second, nodes exchange their replicas with immediate neighbors and compute a consensus variable z by averaging the received copies. Third, each node updates its dual variable based on the disagreement between its replica and the consensus. Only the replica and the dual variable are transmitted, which keeps the message size proportional to the dimension of x, not to the size of A.
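The three-step round structure can be sketched with a standard global-consensus ADMM iteration. This is not the paper's D-ADMM: to stay self-contained it uses a LASSO-style local cost (so the replica update has a closed form) and a global average for the consensus step, where D-ADMM restricts that exchange to immediate neighbors in the communication graph. All sizes and parameters below are illustrative assumptions.

```python
import numpy as np

def soft(v, t):
    """Elementwise soft-thresholding, the closed-form prox of the L1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(2)
P, mi, n = 3, 8, 10                # P nodes, each holding mi rows (hypothetical)
A = [rng.standard_normal((mi, n)) for _ in range(P)]
b = [rng.standard_normal(mi) for _ in range(P)]
lam, rho = 0.1, 1.0

x = [np.zeros(n) for _ in range(P)]   # per-node replicas of the global variable
u = [np.zeros(n) for _ in range(P)]   # per-node scaled dual variables
z = np.zeros(n)                       # consensus variable

# Factor each node's local system once: the replica update is then a
# cheap closed-form solve, matching the "negligible burden" observation.
F = [np.linalg.cholesky(Ai.T @ Ai + rho * np.eye(n)) for Ai in A]

for _ in range(300):
    # 1) local closed-form replica update
    for i in range(P):
        rhs = A[i].T @ b[i] + rho * (z - u[i])
        x[i] = np.linalg.solve(F[i].T, np.linalg.solve(F[i], rhs))
    # 2) consensus step: here a global average; D-ADMM exchanges
    #    replicas with immediate neighbors only
    z = soft(np.mean([xi + ui for xi, ui in zip(x, u)], axis=0),
             lam / (rho * P))
    # 3) dual update driven by the replica/consensus disagreement
    for i in range(P):
        u[i] = u[i] + x[i] - z

print(max(np.linalg.norm(xi - z) for xi in x))  # disagreement shrinks toward 0
```

Note that each round transmits only the n-dimensional replica and dual variable, never the local blocks of A — the property the summary highlights about message size.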
The authors provide a rigorous convergence analysis that extends standard ADMM theory to the decentralized setting. They show that, provided the graph Laplacian’s second smallest eigenvalue (the algebraic connectivity λ₂) is positive, the iterates converge to the optimal BP solution. Moreover, λ₂ directly influences the convergence rate: higher connectivity yields faster consensus and thus fewer communication rounds.
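The role of algebraic connectivity is easy to check numerically: λ₂, the second-smallest eigenvalue of the graph Laplacian L = D − W, is positive exactly when the graph is connected, and denser topologies have larger λ₂. The small sketch below (example graphs chosen for illustration) computes λ₂ for three 5-node topologies.

```python
import numpy as np

def laplacian(edges, n):
    """Laplacian L = D - W of an undirected graph given as an edge list."""
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, i] += 1; L[j, j] += 1
        L[i, j] -= 1; L[j, i] -= 1
    return L

n = 5
path = [(0, 1), (1, 2), (2, 3), (3, 4)]                    # sparsest connected graph
ring = path + [(4, 0)]
full = [(i, j) for i in range(n) for j in range(i + 1, n)]  # complete graph

lam2 = {}
for name, edges in [("path", path), ("ring", ring), ("complete", full)]:
    lam2[name] = np.sort(np.linalg.eigvalsh(laplacian(edges, n)))[1]
    print(f"{name:9s} lambda_2 = {lam2[name]:.3f}")
```

The printed values increase from path to ring to complete graph, mirroring the claim that higher connectivity yields faster consensus and fewer communication rounds.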
Experimental evaluation uses both synthetic Gaussian matrices and real‑world image reconstruction tasks (e.g., Lena, Barbara). D‑ADMM is benchmarked against state‑of‑the‑art distributed BP solvers such as Distributed Subgradient, Consensus‑ADMM, and a recent Distributed Proximal Gradient method. The results demonstrate that D‑ADMM attains comparable reconstruction quality (measured by PSNR and SNR) while requiring substantially fewer communication rounds—typically a 30%–50% reduction. In the row‑distributed scenario, the reduction is even more pronounced because each node stores only a small portion of A, dramatically lowering memory requirements and making the algorithm suitable for resource‑constrained edge devices.
The paper concludes with a discussion of future directions. Potential extensions include asynchronous updates to tolerate communication delays and packet loss, dynamic graph handling for node join/leave events, and generalization to other L1‑regularized problems such as LASSO or Elastic Net. The authors also suggest hardware acceleration (FPGA or ASIC) for real‑time deployment in large‑scale IoT networks.
In summary, D‑ADMM offers a practical, communication‑efficient, and fully decentralized solution to the Basis Pursuit problem. By eliminating the need for a central coordinator and reducing both memory and bandwidth demands, it opens the door for scalable compressed‑sensing applications in modern distributed sensing infrastructures.