Joint-sparse recovery from multiple measurements
The joint-sparse recovery problem aims to recover, from sets of compressed measurements, unknown sparse matrices with nonzero entries restricted to a subset of rows. This is an extension of the single-measurement-vector (SMV) problem widely studied in compressed sensing. We analyze the recovery properties for two types of recovery algorithms. First, we show that recovery using sum-of-norm minimization cannot exceed the uniform recovery rate of sequential SMV using $\ell_1$ minimization, and that there are problems that can be solved with one approach but not with the other. Second, we analyze the performance of the ReMBo algorithm [M. Mishali and Y. Eldar, IEEE Trans. Sig. Proc., 56 (2008)] in combination with $\ell_1$ minimization, and show how recovery improves as more measurements are taken. From this analysis it follows that having more measurements than number of nonzero rows does not improve the potential theoretical recovery rate.
A problem of central importance in compressed sensing [1,10] is the following: given an $m \times n$ matrix $A$ and a measurement vector $b = Ax_0$, recover $x_0$. When $m < n$ this problem is ill-posed, and it is not generally possible to uniquely recover $x_0$ without some prior information. In many important cases $x_0$ is known to be sparse, and it may be appropriate to solve
$$\min_{x\in\mathbb{R}^n} \|x\|_0 \quad\text{subject to}\quad Ax = b, \tag{1.1}$$
to find the sparsest possible solution. (The $\ell_0$-norm $\|\cdot\|_0$ of a vector counts the number of nonzero entries.) If $x_0$ has fewer than $s/2$ nonzero entries, where $s$ is the number of nonzeros in the sparsest null-vector of $A$, then $x_0$ is the unique solution of this optimization problem [12,19]. The main obstacle to this approach is that problem (1.1) is combinatorial [24], and therefore impractical for all but the smallest problems. To overcome this, Chen et al. [6] introduced basis pursuit:
$$\min_{x\in\mathbb{R}^n} \|x\|_1 \quad\text{subject to}\quad Ax = b. \tag{1.2}$$
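The combinatorial nature of (1.1) can be made concrete with a brute-force search over supports: for each candidate support we solve a least-squares fit and accept the first exact fit found. This is an illustrative sketch only (the function name and problem sizes are ours, not from the paper); the nested loop over $\binom{n}{s}$ supports is exactly why the approach is impractical beyond tiny problems.

```python
import itertools
import numpy as np

def l0_recover(A, b, tol=1e-9):
    """Brute-force l0 minimization: try every support of size 1, 2, ...
    and return the first exact solution found (hence the sparsest)."""
    m, n = A.shape
    for s in range(1, n + 1):
        for support in itertools.combinations(range(n), s):
            cols = A[:, support]
            coef, *_ = np.linalg.lstsq(cols, b, rcond=None)
            if np.linalg.norm(cols @ coef - b) < tol:
                x = np.zeros(n)
                x[list(support)] = coef
                return x
    return None

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 8))          # m = 5 measurements, n = 8 unknowns
x0 = np.zeros(8)
x0[[2, 6]] = [1.0, -2.0]                 # 2-sparse ground truth
x = l0_recover(A, A @ x0)
```

For a generic (e.g., Gaussian) matrix $A$, any 5 columns are linearly independent, so the 2-sparse $x_0$ is the unique sparsest solution and the search recovers it, but only after scanning all smaller supports first.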
This convex relaxation, based on the $\ell_1$-norm $\|x\|_1$, can be solved much more efficiently; moreover, under certain conditions [2,11], it yields the same solution as the $\ell_0$ problem (1.1).
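One reason basis pursuit is tractable is that (1.2) can be posed as a linear program via the standard splitting $x = u - v$ with $u, v \ge 0$, so that $\|x\|_1 = \mathbf{1}^T(u+v)$. The sketch below (our reformulation, not code from the paper) solves it with SciPy's LP solver:

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, b):
    """Solve min ||x||_1 s.t. Ax = b as an LP in (u, v), x = u - v."""
    m, n = A.shape
    c = np.ones(2 * n)                    # objective: sum(u) + sum(v)
    A_eq = np.hstack([A, -A])             # A(u - v) = b
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None), method="highs")
    u, v = res.x[:n], res.x[n:]
    return u - v

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 50))
x0 = np.zeros(50)
x0[[3, 17, 41]] = [2.0, -1.0, 0.5]        # 3-sparse ground truth
x = basis_pursuit(A, A @ x0)
```

Since $x_0$ is feasible, the computed solution always satisfies $\|x\|_1 \le \|x_0\|_1$; whether $x = x_0$ exactly depends on the equivalence conditions discussed below.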
A natural extension of the single-measurement-vector (SMV) problem just described is the multiple-measurement-vector (MMV) problem. Instead of a single measurement $b$, we are given a set of $r$ measurements $b^{(k)} = Ax_0^{(k)}$, $k = 1, \ldots, r$, in which the vectors $x_0^{(k)}$ are jointly sparse, i.e., have nonzero entries at the same locations. Such problems arise in source localization [22], neuromagnetic imaging [8], and equalization of sparse-communication channels [7,15]. Succinctly, the aim of the MMV problem is to recover $X_0$ from observations $B = AX_0$, where $B = [b^{(1)}, b^{(2)}, \ldots, b^{(r)}]$ is an $m \times r$ matrix, and the $n \times r$ matrix $X_0$ is row sparse, i.e., it has nonzero entries in only a small number of rows. The most widely studied approach to the MMV problem is based on solving the convex optimization problem
$$\min_{X\in\mathbb{R}^{n\times r}} \|X\|_{p,q} \quad\text{subject to}\quad AX = B,$$
where the mixed $\ell_{p,q}$ norm of $X$ is defined as
$$\|X\|_{p,q} := \Big(\sum_{j=1}^{n} \|X_{j\to}\|_q^p\Big)^{1/p},$$
and $X_{j\to}$ is the (column) vector whose entries form the $j$th row of $X$. In particular, Cotter et al. [8] consider $p = 2$, $q \le 1$; Tropp [28,29] analyzes $p = 1$, $q = \infty$; Malioutov et al. [22] and Eldar and Mishali [14] use $p = 1$, $q = 2$; and Chen and Huo [5] study $p = 1$, $q \ge 1$. A different approach is given by Mishali and Eldar [23], who propose the ReMBo algorithm, which reduces the MMV problem to a series of SMV problems.
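The mixed norm is easy to compute directly from its definition: take the $q$-norm of each row, then the $p$-norm of the resulting vector of row norms. A minimal NumPy sketch (the function name is ours):

```python
import numpy as np

def mixed_norm(X, p, q):
    """Mixed l_{p,q} norm: p-norm of the vector of row-wise q-norms."""
    row_norms = np.linalg.norm(X, ord=q, axis=1)   # ||X_{j->}||_q per row
    return np.linalg.norm(row_norms, ord=p)

# A 6 x 3 row-sparse matrix: nonzero entries only in rows 1 and 4.
X = np.zeros((6, 3))
X[1] = [3.0, 4.0, 0.0]
X[4] = [0.0, 0.0, 2.0]

l12 = mixed_norm(X, 1, 2)        # sum of row 2-norms: 5 + 2 = 7
l11 = mixed_norm(X, 1, 1)        # sum of all |entries|: 7 + 2 = 9
l1inf = mixed_norm(X, 1, np.inf) # sum of row max-magnitudes: 4 + 2 = 6
```

With $p = 1$, $q = 2$ this is precisely the sum-of-norms objective studied in this paper, and with $p = q = 1$ it reduces to the entrywise $\ell_1$ norm of $X$.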
In this paper we study the sum-of-norms problem and the conditions for uniform recovery of all $X_0$ with a fixed row support, and compare this against recovery using $\ell_{1,1}$. We then construct matrices $X_0$ that cannot be recovered using $\ell_{1,1}$ but for which $\ell_{1,2}$ succeeds, and vice versa. We illustrate the individual recovery properties of $\ell_{1,1}$ and $\ell_{1,2}$ with empirical results. We further show how recovery via $\ell_{1,1}$ changes as the number of measurements increases, and propose a boosted-$\ell_1$ approach that improves on the $\ell_{1,1}$ approach. This analysis provides the starting point for our study of the recovery properties of ReMBo, based on a geometric interpretation of the algorithm.
We begin in Section 2 by summarizing existing $\ell_0$-$\ell_1$ equivalence results, which give conditions under which the solution of the $\ell_1$ relaxation (1.2) coincides with the solution of the $\ell_0$ problem (1.1). In Section 3 we consider the $\ell_{1,2}$ mixed-norm and sum-of-norms formulations and compare their performance against $\ell_{1,1}$. In Sections 4 and 5 we examine two approaches that are based on sequential application of (1.2).
Notation. We assume throughout that $A$ is a full-rank matrix in $\mathbb{R}^{m\times n}$, and that $X_0$ is an $s$ row-sparse matrix in $\mathbb{R}^{n\times r}$. We follow the convention that all vectors are column vectors. For an arbitrary matrix $M$, its $j$th column is denoted by the column vector $M_{\downarrow j}$; its $i$th row is the transpose of the column vector $M_{i\to}$. The $i$th entry of a vector $v$ is denoted by $v_i$. We make exceptions for $e_i = I_{\downarrow i}$ and for $x_0$ (resp., $X_0$), which represents the sparse vector (resp., matrix) we want to recover. When there is no ambiguity we sometimes write $m_i$ to denote $M_{\downarrow i}$. When concatenating vectors into matrices, $[a, b, c]$ denotes horizontal concatenation and $[a; b; c]$ denotes vertical concatenation. When indexing with a set $I$, we define the vector $v_I := [v_i]_{i\in I}$ and the $m \times |I|$ matrix $A_I := [A_{\downarrow j}]_{j\in I}$. Row or column selection takes precedence over all other operators.
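For readers experimenting numerically, these conventions map directly onto NumPy indexing (an illustrative correspondence only; the matrix and set below are ours):

```python
import numpy as np

M = np.arange(12).reshape(3, 4)   # a small 3 x 4 matrix

col_j = M[:, 1]        # M_{↓1}: column with index 1, as a vector
row_i = M[0, :]        # entries of M_{0→}: row 0, as a vector

v = np.array([10.0, 20.0, 30.0, 40.0])
I = [0, 2]
v_I = v[I]             # v_I := [v_i]_{i in I}
A_I = M[:, I]          # A_I := columns of M indexed by I, shape (3, |I|)

H = np.hstack([M, M])  # [M, M]: horizontal concatenation
V = np.vstack([M, M])  # [M; M]: vertical concatenation
```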
The conditions under which (1.2) gives the sparsest possible solution have been studied using a number of different techniques. By far the most popular analytical approach is based on the restricted isometry property, introduced by Candès and Tao [3], which gives sufficient conditions for equivalence. Donoho [9] obtains necessary and sufficient (NS) conditions by analyzing the underlying geometry of (1.2). Several authors [12,13,19] characterize the NS-conditions in terms of properties of the kernel of $A$: every $x_0$ supported on an index set $I$ is recovered by (1.2) if and only if
$$\|v_I\|_1 < \|v_{\bar I}\|_1 \quad\text{for all } v \in \mathrm{Ker}(A)\setminus\{0\},$$
where $\bar I$ denotes the complement of $I$.
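The kernel-based condition (the null-space property, $\|v_I\|_1 < \|v_{\bar I}\|_1$ for every nonzero $v$ in the kernel and every $|I| \le s$) can be probed numerically. For a given $v$, the worst support $I$ consists of its $s$ largest-magnitude entries, so only that one support needs checking per sample. The sketch below tests the condition on random kernel samples; this is a heuristic check under our own sampling assumption, since a certificate for all kernel vectors requires an optimization over the kernel.

```python
import numpy as np
from scipy.linalg import null_space

def nsp_holds_on_samples(A, s, n_samples=500, seed=0):
    """Heuristically test the null-space property of order s by sampling
    random vectors from Ker(A)."""
    N = null_space(A)                    # orthonormal basis of Ker(A)
    rng = np.random.default_rng(seed)
    for _ in range(n_samples):
        v = N @ rng.standard_normal(N.shape[1])
        a = np.sort(np.abs(v))[::-1]     # worst I = the s largest entries
        if a[:s].sum() >= a[s:].sum():
            return False                 # condition violated for this v
    return True

# Kernel spanned by (1,1,1,1): mass is spread out, so s = 1 passes.
A_good = np.array([[1., -1, 0, 0], [0, 1, -1, 0], [0, 0, 1, -1]])
# Kernel spanned by e_3: all mass on one entry, so s = 1 fails.
A_bad = np.array([[1., 0, 0], [0, 1, 0]])
```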
Fuchs [16] and Tropp [27] express sufficient conditions in terms of the solution of the dual of (1.2):
$$\max_{y\in\mathbb{R}^m} \; b^{T}y \quad\text{subject to}\quad \|A^{T}y\|_\infty \le 1.$$
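The dual, $\max\, b^T y$ subject to $\|A^T y\|_\infty \le 1$, is itself a linear program, and by LP strong duality its optimal value equals the primal $\ell_1$ norm. A numerical sketch (variable names are ours):

```python
import numpy as np
from scipy.optimize import linprog

def bp_primal(A, b):
    """min ||x||_1 s.t. Ax = b, via the split x = u - v, u, v >= 0."""
    m, n = A.shape
    res = linprog(np.ones(2 * n), A_eq=np.hstack([A, -A]), b_eq=b,
                  bounds=(0, None), method="highs")
    return res.x[:n] - res.x[n:]

def bp_dual(A, b):
    """max b^T y s.t. ||A^T y||_inf <= 1, i.e. -1 <= A^T y <= 1."""
    m, n = A.shape
    G = np.vstack([A.T, -A.T])           # stack both inequality directions
    res = linprog(-b, A_ub=G, b_ub=np.ones(2 * n),
                  bounds=(None, None), method="highs")
    return res.x

rng = np.random.default_rng(2)
A = rng.standard_normal((10, 30))
x0 = np.zeros(30)
x0[[4, 12]] = [1.5, -0.5]
b = A @ x0
x, y = bp_primal(A, b), bp_dual(A, b)
```

The dual optimum $b^T y$ matches $\|x\|_1$ at the primal solution, and the sign pattern of $A^T y$ on the support of $x$ is what the Fuchs/Tropp conditions examine.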
In this paper we are mainly concerned with the geometric
…(Full text truncated)…