The rank minimization problem is to find the lowest-rank matrix in a given set. Nuclear norm minimization has been proposed as a convex relaxation of rank minimization. Recht, Fazel, and Parrilo have shown that nuclear norm minimization subject to an affine constraint is equivalent to rank minimization under a certain condition given in terms of the rank-restricted isometry property. However, in the presence of measurement noise, or with an only approximately low-rank generative model, the appropriate constraint set is an ellipsoid rather than an affine space. There exist polynomial-time algorithms to solve nuclear norm minimization with an ellipsoidal constraint, but no performance guarantee has been shown for these algorithms. In this paper, we derive such an explicit performance guarantee, bounding the error in the approximate solution provided by nuclear norm minimization with an ellipsoidal constraint.
Guaranteed Minimum Rank Approximation from Linear Observations by Nuclear Norm Minimization with an Ellipsoidal Constraint
The rank minimization problem is to find the lowest-rank matrix in a given set $\mathcal{C}$ [FHB01], i.e.,
$$\min_{X \in \mathbb{C}^{m \times n}} \operatorname{rank}(X) \quad \text{subject to} \quad X \in \mathcal{C}. \tag{1}$$
In particular, there are applications such as matrix completion and minimum-order system identification that require the reconstruction of a low-rank matrix $X \in \mathbb{C}^{m \times n}$ from the linear measurement $b = \mathcal{A}X \in \mathbb{C}^p$ obtained with a given linear operator $\mathcal{A} : \mathbb{C}^{m \times n} \to \mathbb{C}^p$. In this case, the set $\mathcal{C}$ is the affine space $\mathcal{C} = \{X : \mathcal{A}X = b\}$, and we are solving the inverse problem $\mathcal{A}X = b$ for $X$ with the a priori information that the true solution is a low-rank matrix.
In general, rank minimization is a difficult non-convex optimization problem, and no polynomial-time algorithm for it has been proposed to date. Nuclear norm minimization [FHB01] is a convex relaxation of the rank minimization problem with a convex set $\mathcal{C}$. Recht, Fazel, and Parrilo derived a performance guarantee for nuclear norm minimization with an affine constraint [RFP07]. A sufficient condition for the guarantee is given in terms of the rank-restricted isometry property of the linear operator $\mathcal{A}$. Roughly, when $\mathcal{A}$ is nearly an isometry on low-rank matrices, rank minimization is equivalent to nuclear norm minimization and hence can be solved in polynomial time.
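To make the relaxation concrete: the nuclear norm $\|X\|_*$ is the sum of the singular values of $X$, while $\operatorname{rank}(X)$ is the number of nonzero singular values. A minimal NumPy sketch of the two quantities (the sizes and data below are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a rank-2 matrix as a sum of two random outer products.
m, n, r = 8, 6, 2
X = sum(np.outer(rng.standard_normal(m), rng.standard_normal(n)) for _ in range(r))

s = np.linalg.svd(X, compute_uv=False)  # singular values, descending
rank = int(np.sum(s > 1e-10))           # numerical rank = # nonzero singular values
nuclear_norm = s.sum()                  # ||X||_* = sum of singular values

print(rank)  # 2
```

Just as the $\ell_1$ norm is the convex surrogate for the $\ell_0$ "norm" of a vector, the nuclear norm is the convex surrogate for the rank, which is why it yields a tractable relaxation.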
However, in some applications of rank minimization such as minimum-order system approximation, reduced-order controller design, and the Euclidean distance matrix problem [LLR95], the inverse problem $\mathcal{A}X = b$ with given linear operator $\mathcal{A}$ and measurement $b$ may not admit a low-rank solution. For example, in minimum-order system approximation, the given system cannot be described by a low-rank matrix but can be well approximated by one. In this case, the minimum rank of solutions to $\mathcal{A}X = b$, given by $\min_X \{\operatorname{rank}(X) : \mathcal{A}X = b\}$, can be higher than the desired target rank. Another possibility is that there is additive noise in the measurements; again the inverse problem $\mathcal{A}X = b$ may not admit a low-rank solution. Instead, in order to find a low-rank approximate solution whose rank is lower than the target value required by the application, the set $\mathcal{C}$ can be modified to an ellipsoid given by
$$\mathcal{C} = \{X \in \mathbb{C}^{m \times n} : \|\mathcal{A}X - b\|_2 \leq \epsilon\}. \tag{2}$$
The resulting rank minimization problem defined by (1) and (2) is hard, and its nuclear norm convex relaxation can be used to obtain approximate solutions. In fact, there exist polynomial-time algorithms to solve the nuclear norm minimization problem with an ellipsoidal constraint (e.g., [FHB01], [CCS08]).
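The details of those algorithms are beyond this section, but the idea behind SVT-type methods [CCS08] can be sketched: solve a Lagrangian form $\min_X \tfrac{1}{2}\|\mathcal{A}X - b\|_2^2 + \lambda \|X\|_*$ by proximal gradient descent, where the proximal operator of the nuclear norm is singular value (soft-)thresholding. The NumPy sketch below is illustrative only; the problem sizes, $\lambda$, and step size are assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, p, r = 10, 8, 60, 2

# Random linear operator A acting on vec(X); noisy rank-2 ground truth.
A = rng.standard_normal((p, m * n)) / np.sqrt(p)
X_true = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
b = A @ X_true.ravel() + 0.01 * rng.standard_normal(p)

def svt(Z, tau):
    """Singular value thresholding: the prox of tau * ||.||_*."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

lam = 0.05                               # illustrative regularization weight
t = 1.0 / np.linalg.norm(A, 2) ** 2      # step size 1 / ||A||_2^2 (Lipschitz bound)
X = np.zeros((m, n))
for _ in range(500):
    grad = (A.T @ (A @ X.ravel() - b)).reshape(m, n)  # grad of 0.5||AX - b||^2
    X = svt(X - t * grad, t * lam)                    # proximal (thresholding) step

err = np.linalg.norm(X - X_true) / np.linalg.norm(X_true)
```

With step size at most the inverse Lipschitz constant, each iteration decreases the objective, and the thresholding step drives small singular values to zero, producing a low-rank iterate.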
However, while empirically effective, these algorithms have lacked a theoretical performance guarantee. Our goal in this paper is to close this gap in the theory. We are motivated by the analogy established by Recht, Fazel, and Parrilo [RFP07] between the rank minimization problem and $\ell_0$ norm minimization, or equivalently compressed sensing, in the affine constraint case. This analogy extends to the convex relaxations of these problems: nuclear norm minimization and $\ell_1$ norm minimization, respectively [RFP07].
For the affine constraint case, Candes and Tao [CT05] gave a sufficient condition for the equivalence of $\ell_0$ norm minimization to its $\ell_1$ relaxation (basis pursuit), in the sense that both problems admit the same, unique solution. The condition is given in terms of the sparsity-restricted isometry property of the sensing matrix. For the ellipsoidal constraint case, also known as the noisy and compressible signal case, Candes extended the performance guarantee of $\ell_1$ norm minimization, showing that the error in the sparse approximate solution is bounded by a weighted sum of the best sparse approximation error of the true solution and a bound on the energy of the noise in the measurement [Can08].
An analogous performance guarantee for nuclear norm minimization with an ellipsoidal constraint has not been available to date.
In this paper, we establish the relation between the rank minimization problem with an ellipsoidal constraint and its convex relaxation, using an analogue of Candes's approach for $\ell_1$ norm minimization [Can08]. The extended performance guarantee is given in terms of the rank-restricted isometry property and bounds the error in the low-rank approximate solution by a weighted sum of the error in the best low-rank approximation of the true solution and a bound on the energy of the measurement noise.
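The first term in such a bound involves the best rank-$r$ approximation of the true matrix, which by the Eckart–Young theorem is obtained by truncating the SVD to its $r$ largest singular triplets. A small NumPy illustration (sizes and data are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((10, 8))
r = 3

# Best rank-r approximation: keep only the r largest singular triplets.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
X_r = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]

# Frobenius error of the best approximation = energy of the tail singular values.
best_err = np.linalg.norm(X - X_r)
assert np.isclose(best_err, np.sqrt((s[r:] ** 2).sum()))

# Eckart-Young: no other rank-r matrix does better (check one random candidate).
Y = rng.standard_normal((10, r)) @ rng.standard_normal((r, 8))
assert np.linalg.norm(X - Y) >= best_err
```

When the true matrix is exactly rank $r$ and the measurements are noise-free, both terms of the bound vanish and exact recovery is predicted, matching the affine-constraint result of [RFP07].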
Consider two Hilbert spaces $\mathbb{C}^{m \times n}$ and $\mathbb{C}^p$. For $X, Y \in \mathbb{C}^{m \times n}$, the inner product is defined by $\langle X, Y \rangle_{\mathbb{C}^{m \times n}} = \operatorname{Tr}(Y^H X)$, where $Y^H$ denotes the Hermitian transpose of $Y$. Then the induced Hilbert–Schmidt norm on $\mathbb{C}^{m \times n}$ is the Frobenius norm and will be denoted by $\|\cdot\|_F$. For $x, y \in \mathbb{C}^p$, the inner product is defined by $\langle x, y \rangle_{\mathbb{C}^p} = y^H x$. Then the induced Hilbert–Schmidt norm on $\mathbb{C}^p$ is the Euclidean norm and will be denoted by $\|\cdot\|_2$.
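These definitions can be checked numerically: the trace inner product equals the entrywise sum $\sum_{ij} \overline{Y}_{ij} X_{ij}$, and the norm it induces is exactly the Frobenius norm. A quick NumPy verification (random complex matrices as illustrative inputs):

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))
Y = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))

# <X, Y> = Tr(Y^H X) equals the entrywise sum of conj(Y_ij) * X_ij.
ip = np.trace(Y.conj().T @ X)
assert np.isclose(ip, (Y.conj() * X).sum())

# The induced norm <X, X>^{1/2} is the Frobenius norm.
assert np.isclose(np.sqrt(np.trace(X.conj().T @ X).real), np.linalg.norm(X))
```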
The setting for the low-rank matrix recovery and approximation problem is the following. The measurement (with perturbation) of an unknown matrix
a given linear operator. The inverse problem i
…(Full text truncated)…