Guaranteed Minimum Rank Approximation from Linear Observations by Nuclear Norm Minimization with an Ellipsoidal Constraint



Kiryung Lee and Yoram Bresler

March 17, 2022

Abstract

The rank minimization problem is to find the lowest-rank matrix in a given set. Nuclear norm minimization has been proposed as a convex relaxation of rank minimization. Recht, Fazel, and Parrilo have shown that nuclear norm minimization subject to an affine constraint is equivalent to rank minimization under a certain condition given in terms of the rank-restricted isometry property. However, in the presence of measurement noise, or with an only approximately low-rank generative model, the appropriate constraint set is an ellipsoid rather than an affine space. There exist polynomial-time algorithms to solve nuclear norm minimization with an ellipsoidal constraint, but no performance guarantee has been shown for these algorithms. In this paper, we derive such an explicit performance guarantee, bounding the error in the approximate solution provided by nuclear norm minimization with an ellipsoidal constraint.

1 Introduction

The rank minimization problem is to find the lowest-rank matrix in a given set $C$ [FHB01], i.e.,

$$\min_{X \in \mathbb{C}^{m \times n}} \operatorname{rank}(X) \quad \text{subject to} \quad X \in C. \tag{1}$$

In particular, there are applications, such as matrix completion and minimum-order system identification,^1 that require the reconstruction of a low-rank matrix $X \in \mathbb{C}^{m \times n}$ from the linear measurement $b = \mathcal{A}X \in \mathbb{C}^p$ obtained with a given linear operator $\mathcal{A} : \mathbb{C}^{m \times n} \to \mathbb{C}^p$. In this case, the set $C$ is given as an affine space by $C = \{X : \mathcal{A}X = b\}$, and we are solving an inverse problem $\mathcal{A}X = b$ for $X$ with the a priori information that the true solution is a low-rank matrix. In general, rank minimization is a difficult non-convex optimization problem, and no polynomial-time algorithm has been proposed to date.
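The measurement model above can be sketched concretely in numpy. Below is a real-valued stand-in for the complex setting; the sizes $m, n, r, p$ and the Gaussian operator representing $\mathcal{A}$ are illustrative assumptions, since the text leaves $\mathcal{A}$ generic.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r, p = 8, 6, 2, 30          # hypothetical problem sizes

# Rank-r ground truth built from random factors.
X = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))

# A generic linear operator A : C^{m x n} -> C^p can be represented as a
# p x (m*n) matrix acting on the vectorized matrix vec(X).
A = rng.standard_normal((p, m * n)) / np.sqrt(p)
b = A @ X.ravel()                  # affine measurements b = A(X)
```

Recovering $X$ from $b$ alone is the inverse problem; the vectorized representation is convenient because every linear operator on matrices admits such a matrix form.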
Nuclear norm minimization [FHB01] is a convex relaxation of the rank minimization problem with a convex set $C$. Recht, Fazel, and Parrilo derived a performance guarantee for nuclear norm minimization with an affine constraint [RFP07]. A sufficient condition for the performance guarantee is given in terms of the rank-restricted isometry property of the linear operator $\mathcal{A}$. Roughly, when $\mathcal{A}$ is nearly an isometry for low-rank matrices, rank minimization is equivalent to nuclear norm minimization and hence can be solved in polynomial time.

However, in some applications of rank minimization, such as minimum-order system approximation, reduced-order controller design, and the Euclidean distance matrix problem [LLR95], the inverse problem $\mathcal{A}X = b$ with given linear operator $\mathcal{A}$ and measurement $b$ may not admit a low-rank solution. For example, in minimum-order system approximation, the given system cannot be described by a low-rank matrix but can be well approximated by one. In this case, the minimum rank of solutions to $\mathcal{A}X = b$, which is given by $\min_X \{\operatorname{rank}(X) : \mathcal{A}X = b\}$, can be higher than the desired target rank. Another possibility is that there is additive noise in the measurements. Again, the inverse problem $\mathcal{A}X = b$ may not admit a low-rank solution. Instead, in order to find a low-rank approximate solution whose rank is lower than the target value required by the application, the set $C$ can be modified to an ellipsoid given by

$$C = \{X : \|\mathcal{A}X - b\|_2 \le \epsilon\}. \tag{2}$$

The resulting rank minimization problem defined by (1) and (2) is hard, and its nuclear norm convex relaxation can be used to obtain approximate solutions. In fact, there exist polynomial-time algorithms to solve the nuclear norm minimization problem with an ellipsoidal constraint (e.g., [FHB01], [CCS08]).

^1 For more applications of rank minimization with an affine constraint, see [RFP07], [Faz02], and the references therein.
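One family of polynomial-time solvers of the kind cited above is based on singular value thresholding [CCS08]. The sketch below is not the algorithm of [CCS08] itself, but a simple proximal-gradient variant applied to the Lagrangian form $\tfrac12\|\mathcal{A}X - b\|_2^2 + \lambda\|X\|_*$, which for a suitable $\lambda$ plays the role of the ellipsoidal constraint. All sizes, the Gaussian operator, and the value of $\lambda$ are illustrative assumptions.

```python
import numpy as np

def svt(Z, tau):
    """Singular value soft-thresholding: the prox operator of tau * ||.||_*."""
    U, s, Vh = np.linalg.svd(Z, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vh

def nuclear_norm_pg(A, b, m, n, lam=0.01, iters=1000):
    """Proximal gradient on 0.5*||A vec(X) - b||^2 + lam*||X||_*,
    the Lagrangian counterpart of the ellipsoid-constrained problem."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1 / Lipschitz constant of the gradient
    X = np.zeros((m, n))
    for _ in range(iters):
        grad = (A.T @ (A @ X.ravel() - b)).reshape(m, n)
        X = svt(X - step * grad, step * lam)    # gradient step, then prox step
    return X

# Hypothetical rank-2 ground truth with overdetermined Gaussian measurements.
rng = np.random.default_rng(0)
m, n, r, p = 8, 8, 2, 100
X_true = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
A = rng.standard_normal((p, m * n)) / np.sqrt(p)
b = A @ X_true.ravel()
X_hat = nuclear_norm_pg(A, b, m, n)
rel_err = np.linalg.norm(X_hat - X_true, 'fro') / np.linalg.norm(X_true, 'fro')
```

With noiseless measurements and a small $\lambda$, the iterate converges close to the low-rank ground truth; the thresholding step is what biases the solution toward low rank.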
However, while empirically effective, a theoretical performance guarantee for those algorithms has been missing. Our goal in this paper is to close this gap in theory. We are motivated by the analogy established by Recht, Fazel, and Parrilo [RFP07] between the rank minimization problem and $\ell_0$ norm minimization, or equivalently compressed sensing, for the affine constraint case. This analogy extends to the convex relaxations of these problems, nuclear norm minimization and $\ell_1$ norm minimization, respectively [RFP07].

For the affine constraint case, Candès and Tao [CT05] have given a sufficient condition for the equivalence of $\ell_0$ norm minimization to its $\ell_1$ relaxation (or basis pursuit) in the sense that both problems admit the same and unique solution. The condition is given in terms of the sparsity-restricted isometry property of the sensing matrix. For the ellipsoidal constraint case, also known as the noisy and compressible signal case, Candès extended the performance guarantee of $\ell_1$ norm minimization, showing that the error in the sparse approximate solution is bounded by a weighted sum of the best sparse approximation error of the true solution and a bound on the energy of the noise in the measurement [Can08]. An analogous performance guarantee for nuclear norm minimization with an ellipsoidal constraint has not been available to date.

In this paper, we seek the relation between the rank minimization problem with an ellipsoidal constraint and its convex relaxation. Basically, we use an analogue of the approach by Candès for $\ell_1$ norm minimization [Can08]. The extended performance guarantee is given in terms of the rank-restricted isometry property and bounds the error in the low-rank approximate solution by a weighted sum of the error in the best low-rank approximation of the true solution and a bound on the energy of the measurement noise.
2 Performance Guarantee

Consider two Hilbert spaces $\mathbb{C}^{m \times n}$ and $\mathbb{C}^p$. For $X, Y \in \mathbb{C}^{m \times n}$, the inner product is defined by $\langle X, Y \rangle_{\mathbb{C}^{m \times n}} = \operatorname{Tr}(Y^H X)$, where $Y^H$ denotes the Hermitian transpose of $Y$. The induced Hilbert-Schmidt norm on $\mathbb{C}^{m \times n}$ is then the Frobenius norm, denoted by $\|\cdot\|_F$. For $x, y \in \mathbb{C}^p$, the inner product is defined by $\langle x, y \rangle_{\mathbb{C}^p} = y^H x$. The induced Hilbert-Schmidt norm on $\mathbb{C}^p$ is then the Euclidean norm, denoted by $\|\cdot\|_2$.

The setting for the low-rank matrix recovery and approximation problem is the following. The measurement (with perturbation) of an unknown matrix $X \in \mathbb{C}^{m \times n}$ is given as $b = \mathcal{A}X + \nu$ with $\|\nu\|_2 \le \epsilon$, where $\mathcal{A} : \mathbb{C}^{m \times n} \to \mathbb{C}^p$ is a given linear operator. The inverse problem is to recover the matrix $X$, which is considered as an unknown true solution, with the side information that $X$ has low rank or can be well approximated by such a matrix. Accordingly, the problem may be formulated as in (1), with the ellipsoidal constraint (2). The convex relaxation of this problem is the nuclear norm minimization problem

$$\text{P:} \quad \min_{X \in \mathbb{C}^{m \times n}} \|X\|_* \quad \text{subject to} \quad \|\mathcal{A}X - b\|_2 \le \epsilon,$$

where $\|X\|_*$ denotes the nuclear norm, which is the sum of the singular values of $X$. Problem P admits a low-rank solution that is a low-rank approximate solution to the original inverse problem. The quality of this approximate solution can be guaranteed subject to a condition on the rank-restricted isometry constant.

Given a linear operator $\mathcal{A} : \mathbb{C}^{m \times n} \to \mathbb{C}^p$, the rank-restricted isometry constant $\delta_r(\mathcal{A})$ is defined as the minimum constant that satisfies

$$(1 - \delta_r(\mathcal{A})) \|X\|_F^2 \le \|\mathcal{A}X\|_2^2 \le (1 + \delta_r(\mathcal{A})) \|X\|_F^2 \tag{3}$$

for all $X \in \mathbb{C}^{m \times n}$ with $\operatorname{rank}(X) \le r$.^2

Theorem 2.1. Let $X^\star$ be the solution to P.
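The constant $\delta_r(\mathcal{A})$ in (3) is a supremum over all rank-$r$ matrices and is not directly computable in general; sampling random rank-$r$ matrices can only certify a lower bound on it. A hedged numpy sketch of such a probe, where the operator, sizes, and sample count are illustrative assumptions:

```python
import numpy as np

def delta_lower_bound(A, m, n, r, trials=200, seed=1):
    """Monte-Carlo lower bound on the rank-restricted isometry constant
    delta_r(A): the worst observed deviation of ||A vec(X)||_2^2 / ||X||_F^2
    from 1 over random rank-r matrices X. The true delta_r is a supremum
    over ALL rank-r X, so sampling can only underestimate it."""
    rng = np.random.default_rng(seed)
    worst = 0.0
    for _ in range(trials):
        X = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
        ratio = np.linalg.norm(A @ X.ravel()) ** 2 / np.linalg.norm(X, 'fro') ** 2
        worst = max(worst, abs(ratio - 1.0))
    return worst

# A normalized Gaussian operator is nearly an isometry on low-rank matrices
# when p is large; the sizes here are illustrative.
rng = np.random.default_rng(0)
m, n, r, p = 8, 6, 2, 400
A = rng.standard_normal((p, m * n)) / np.sqrt(p)
d = delta_lower_bound(A, m, n, r)
```

For this well-conditioned random operator the observed deviation stays well below 1, which is the regime the theorem below requires.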
If $\mathcal{A}$ has the rank-restricted isometry constant $\delta_{3r}(\mathcal{A}) < 1/(1 + 4/\sqrt{3})$, then

$$\|X^\star - X\|_F \le K_0 \|X - X_r\|_F + K_1 \epsilon, \tag{4}$$

where $X_r$ denotes the best rank-$r$ approximation of $X$, given by

$$X_r \triangleq \arg\min_{Z \in \mathbb{C}^{m \times n}} \{ \|X - Z\|_F : \operatorname{rank}(Z) \le r \}.$$

The constants $K_0$ and $K_1$ are given as

$$K_0 = \left( \frac{4\sqrt{2}}{\sqrt{3}} \right) \frac{1 + (\sqrt{2} - 1)\,\delta_{3r}(\mathcal{A})}{1 - (1 + 4/\sqrt{3})\,\delta_{3r}(\mathcal{A})}, \qquad K_1 = \left( \frac{\sqrt{3} + 2\sqrt{2}}{\sqrt{3}} \right) \frac{2\sqrt{1 + \delta_{3r}(\mathcal{A})}}{1 - (1 + 4/\sqrt{3})\,\delta_{3r}(\mathcal{A})},$$

respectively.

^2 The definition of the rank-restricted isometry property is slightly different from that in [RFP07] in the sense that the norms in the inequality are squared in our definition. This is done for consistency with the sparsity-restricted isometry property for $\ell_0$ norm minimization [Can08].

The two terms $\|X - X_r\|_F$ and $\epsilon$ in the bound (4) reflect the compressibility of the matrix $X$ and the strength of the measurement noise, respectively. In general, $X$ may not be exactly low-rank with $\operatorname{rank}(X) \le r$, but $X$ admits a good low-rank approximation with small $\|X - X_r\|_F$. The measurements are also subject to a perturbation. These imperfections cause an error in the low-rank approximate solution obtained by nuclear norm minimization. However, the gain of each term is explicitly bounded by a constant determined by the rank-restricted isometry constant. In particular, when $\epsilon = 0$ and $\|X - X_r\|_F = 0$, the solution obtained by nuclear norm minimization coincides with the true solution $X$. Furthermore, an immediate corollary of Theorem 2.1 states that rank minimization with an ellipsoidal constraint can be solved by nuclear norm minimization in polynomial time, in the sense that the distance between the solutions obtained by rank minimization and nuclear norm minimization is bounded as a linear function of $\|X - X_r\|_F$ and $\epsilon$.
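For intuition about the size of these constants, they can be evaluated numerically; as a consistency check, the closed forms of $K_0$ and $K_1$ agree algebraically with the expressions $2\gamma(1+\rho)/(1-\gamma\rho)$ and $(1+\gamma)\alpha/(1-\gamma\rho)$ that conclude the proof in Section 3. A numpy sketch, where $\delta = 0.1$ is an arbitrary choice below the threshold:

```python
import numpy as np

def guarantee_constants(delta):
    """K0, K1 from Theorem 2.1 as functions of delta = delta_{3r}(A)."""
    thresh = 1.0 / (1.0 + 4.0 / np.sqrt(3.0))          # approx 0.302
    assert delta < thresh, "theorem requires delta_{3r} < 1/(1 + 4/sqrt(3))"
    denom = 1.0 - (1.0 + 4.0 / np.sqrt(3.0)) * delta
    K0 = (4.0 * np.sqrt(2.0) / np.sqrt(3.0)) \
        * (1.0 + (np.sqrt(2.0) - 1.0) * delta) / denom
    K1 = ((np.sqrt(3.0) + 2.0 * np.sqrt(2.0)) / np.sqrt(3.0)) \
        * 2.0 * np.sqrt(1.0 + delta) / denom
    return K0, K1

# Cross-check against the intermediate constants used in the proof:
# gamma = 2*sqrt(2)/sqrt(3), rho = sqrt(2)*delta/(1-delta),
# alpha = 2*sqrt(1+delta)/(1-delta).
delta = 0.1
gamma = 2.0 * np.sqrt(2.0) / np.sqrt(3.0)
rho = np.sqrt(2.0) * delta / (1.0 - delta)
alpha = 2.0 * np.sqrt(1.0 + delta) / (1.0 - delta)
K0, K1 = guarantee_constants(delta)
```

Both forms of each constant cancel the common factor $(1-\delta)$, which is why the theorem can state them directly in terms of $\delta_{3r}(\mathcal{A})$.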
3 Proof of Performance Guarantee

We first note the rank-restricted orthogonality property that follows from the rank-restricted isometry property. The following proposition is an extension of Lemma 2.1 in [Can08] from the vector case to the matrix case.

Definition 3.1. Given a set $\Psi = \{\psi_1, \ldots, \psi_{|\Psi|}\} \subset \mathbb{C}^{m \times n}$, define a linear operator $L_\Psi : \mathbb{C}^{|\Psi|} \to \mathbb{C}^{m \times n}$ by

$$L_\Psi \alpha = \sum_{k=1}^{|\Psi|} \alpha_k \psi_k, \quad \forall \alpha \in \mathbb{C}^{|\Psi|}. \tag{5}$$

It follows from (5) that the adjoint operator $L_\Psi^* : \mathbb{C}^{m \times n} \to \mathbb{C}^{|\Psi|}$ is given by

$$(L_\Psi^* X)_k = \langle X, \psi_k \rangle_{\mathbb{C}^{m \times n}}, \quad \forall k = 1, \ldots, |\Psi|, \ \forall X \in \mathbb{C}^{m \times n}. \tag{6}$$

Note that for $\mathcal{A} : \mathbb{C}^{m \times n} \to \mathbb{C}^p$, the operator composition $\mathcal{A} L_\Psi : \mathbb{C}^{|\Psi|} \to \mathbb{C}^p$ admits a matrix representation. Its pseudo-inverse is denoted by $[\mathcal{A} L_\Psi]^\dagger$.

Remark 3.2. If the elements in $\Psi$ are pairwise orthogonal and normalized, then $L_\Psi$ is an isometry and the projection $P_\Psi$ onto $\operatorname{span}(\Psi)$ is given by $P_\Psi = L_\Psi L_\Psi^*$. If $\Psi$ is a set of rank-one matrices, then $\operatorname{rank}(L_\Psi \alpha) \le |\Psi|$ for all $\alpha \in \mathbb{C}^{|\Psi|}$.

Proposition 3.3. Suppose that the linear operator $\mathcal{A} : \mathbb{C}^{m \times n} \to \mathbb{C}^p$ has the rank-restricted isometry constant $\delta_r(\mathcal{A})$. For $X \in \mathbb{C}^{m \times n}$, let $X = \sum_{j=1}^{\operatorname{rank}(X)} \sigma_j \psi_j$ denote the singular value decomposition of $X$, where $\psi_j$ is a rank-one unit-norm matrix obtained by the outer product of the left and right singular vectors corresponding to the $j$-th singular value $\sigma_j$, for $j = 1, \ldots, \operatorname{rank}(X)$. Similarly, for $Y \in \mathbb{C}^{m \times n}$, let $Y = \sum_{j=1}^{\operatorname{rank}(Y)} \sigma'_j \psi'_j$ denote the singular value decomposition of $Y$. If $\langle \psi_j, \psi'_k \rangle_{\mathbb{C}^{m \times n}} = 0$ for all $j = 1, \ldots, \operatorname{rank}(X)$ and $k = 1, \ldots, \operatorname{rank}(Y)$, and $\operatorname{rank}(X) + \operatorname{rank}(Y) \le r$, then

$$|\langle \mathcal{A}X, \mathcal{A}Y \rangle_{\mathbb{C}^p}| \le \delta_r(\mathcal{A}) \|X\|_F \|Y\|_F. \tag{7}$$

Proof. Let $\Psi = \{\psi_j\}_{j=1}^{\operatorname{rank}(X)}$, $\Psi' = \{\psi'_j\}_{j=1}^{\operatorname{rank}(Y)}$, and $\widetilde{\Psi} = \Psi \cup \Psi'$. Then $L_\Psi$, $L_{\Psi'}$, and $L_{\widetilde{\Psi}}$ are isometries.
Therefore, together with the rank-restricted isometry property of $\mathcal{A}$, it follows that

$$1 - \delta_r(\mathcal{A}) \le \sigma_{\min}(L_{\widetilde{\Psi}}^* \mathcal{A}^* \mathcal{A} L_{\widetilde{\Psi}}) \le \sigma_{\max}(L_{\widetilde{\Psi}}^* \mathcal{A}^* \mathcal{A} L_{\widetilde{\Psi}}) \le 1 + \delta_r(\mathcal{A}).$$

Note that $L_{\Psi'}^* \mathcal{A}^* \mathcal{A} L_\Psi$ is an off-diagonal submatrix of $L_{\widetilde{\Psi}}^* \mathcal{A}^* \mathcal{A} L_{\widetilde{\Psi}}$, and therefore also of $L_{\widetilde{\Psi}}^* \mathcal{A}^* \mathcal{A} L_{\widetilde{\Psi}} - I_d$, where $I_d$ is the identity matrix of compatible size. Hence

$$\sigma_{\max}(L_{\Psi'}^* \mathcal{A}^* \mathcal{A} L_\Psi) \le \sigma_{\max}(L_{\widetilde{\Psi}}^* \mathcal{A}^* \mathcal{A} L_{\widetilde{\Psi}} - I_d) \le \max\{(1 + \delta_r(\mathcal{A})) - 1,\ 1 - (1 - \delta_r(\mathcal{A}))\} = \delta_r(\mathcal{A}).$$

Noting that $X = P_\Psi X = L_\Psi L_\Psi^* X$ and $Y = P_{\Psi'} Y = L_{\Psi'} L_{\Psi'}^* Y$, it follows that

$$|\langle \mathcal{A}X, \mathcal{A}Y \rangle_{\mathbb{C}^p}| = |\langle \mathcal{A} L_\Psi L_\Psi^* X, \mathcal{A} L_{\Psi'} L_{\Psi'}^* Y \rangle_{\mathbb{C}^p}| = |\langle [L_{\Psi'}^* \mathcal{A}^* \mathcal{A} L_\Psi] L_\Psi^* X, L_{\Psi'}^* Y \rangle_{\mathbb{C}^{|\Psi'|}}|$$
$$\le \sigma_{\max}(L_{\Psi'}^* \mathcal{A}^* \mathcal{A} L_\Psi) \|L_\Psi^* X\|_2 \|L_{\Psi'}^* Y\|_2 \le \delta_r(\mathcal{A}) \|L_\Psi^* X\|_2 \|L_{\Psi'}^* Y\|_2$$
$$= \delta_r(\mathcal{A}) \|L_\Psi L_\Psi^* X\|_F \|L_{\Psi'} L_{\Psi'}^* Y\|_F = \delta_r(\mathcal{A}) \|X\|_F \|Y\|_F.$$

Next we note certain properties of the nuclear norm.

Lemma 3.4. Let $X, Y \in \mathbb{C}^{m \times n}$. Then $\|X + Y\|_* = \|X\|_* + \|Y\|_*$ if and only if $X$ and $Y$ are simultaneously diagonalizable into nonnegative matrices.

Proof. Let $\Gamma_m$ and $\Gamma_n$ denote the sets of unitary matrices in $\mathbb{C}^{m \times m}$ and $\mathbb{C}^{n \times n}$, respectively. By the variational principle,

$$\|X\|_* = \max_{U \in \Gamma_m, V \in \Gamma_n} \operatorname{Tr}(U^H X V). \tag{8}$$

Moreover, $(U, V)$ is a maximizer of (8) if and only if $U^H X V$ is a diagonal matrix whose diagonal entries are the singular values of $X$. Equation (8) implies

$$\|X + Y\|_* = \max_{U \in \Gamma_m, V \in \Gamma_n} \operatorname{Tr}(U^H (X + Y) V) \tag{9}$$
$$\le \max_{U \in \Gamma_m, V \in \Gamma_n} \operatorname{Tr}(U^H X V) + \max_{U \in \Gamma_m, V \in \Gamma_n} \operatorname{Tr}(U^H Y V) \tag{10}$$
$$= \|X\|_* + \|Y\|_*.$$

Let $(U_0, V_0)$ denote a maximizer of (9). Equality in (10) holds if and only if both $U_0^H X V_0$ and $U_0^H Y V_0$ are diagonal matrices whose diagonal entries correspond to the singular values of $X$ and $Y$, respectively. Noting that the singular values are nonnegative completes the proof.
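Lemma 3.4 is easy to check numerically in a small example: two nonnegative diagonal matrices are trivially simultaneously diagonalizable, so the nuclear norm adds exactly, while a generic pair yields a strict triangle inequality. A sketch with arbitrary illustrative matrices:

```python
import numpy as np

def nuc(M):
    """Nuclear norm: the sum of singular values."""
    return np.linalg.svd(M, compute_uv=False).sum()

# Simultaneously diagonalizable into nonnegative matrices (here: already
# diagonal and nonnegative), so by Lemma 3.4 the nuclear norm is additive.
X = np.diag([3.0, 1.0, 0.0])
Y = np.diag([2.0, 0.5, 4.0])

# A matrix not diagonalized by the same pair of unitaries as X:
# the triangle inequality is then strict.
Z = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
```

Here `nuc(X + Y)` equals `nuc(X) + nuc(Y)` exactly, whereas `nuc(X + Z)` falls strictly below `nuc(X) + nuc(Z)`.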
Corollary 3.5 (Lemma 2.3 in [RFP07]). Let $X, Y \in \mathbb{C}^{m \times n}$. If $X Y^H = 0$ and $X^H Y = 0$, then $\|X + Y\|_* = \|X\|_* + \|Y\|_*$.

Proof. Let $X = U_1 \Sigma_1 V_1^H$ and $Y = U_2 \Sigma_2 V_2^H$ denote the singular value decompositions of $X$ and $Y$, respectively. The assumption implies that $V_1^H V_2 = 0$ and $U_1^H U_2 = 0$. Let $U = [U_1\ U_2]$ and $V = [V_1\ V_2]$. By concatenating orthonormal columns to $U$ and $V$, we construct unitary matrices $\widetilde{U}$ and $\widetilde{V}$ which have $U$ and $V$ as their submatrices, respectively. Then $(\widetilde{U}, \widetilde{V})$ simultaneously diagonalizes $X$ and $Y$ into nonnegative matrices, and hence the result follows by Lemma 3.4.

Lemma 3.6. Let $X \in \mathbb{C}^{m \times n}$, and suppose $\operatorname{rank}(X) \le r$. Then

$$\|X\|_F \le \|X\|_* \le r^{1/2} \|X\|_F.$$

Proof. Let $(\sigma_k)_{k=1}^r$ denote the singular values of $X$ in decreasing order, where $r$ is the rank of $X$. Since $\sigma_k > 0$ for all $k = 1, \ldots, r$,

$$\sqrt{\sum_{k=1}^r \sigma_k^2} \le \sum_{k=1}^r \sigma_k \le r^{1/2} \sqrt{\sum_{k=1}^r \sigma_k^2},$$

where the second inequality follows from the Cauchy-Schwarz inequality. Noting $\|X\|_F^2 = \sum_{k=1}^r \sigma_k^2$ and $\|X\|_* = \sum_{k=1}^r \sigma_k$ completes the proof.

Proof of Theorem 2.1. Let $X = U \Sigma V^H$ denote the full singular value decomposition of $X$, where $U \in \mathbb{C}^{m \times m}$, $\Sigma \in \mathbb{C}^{m \times n}$, and $V \in \mathbb{C}^{n \times n}$. Let $u_k, v_k$ denote the $k$-th columns of $U$ and $V$, respectively. Define four projection operators in terms of the $u_k$'s and $v_k$'s:

$$P_1 Z = \sum_{j=1}^{r} \sum_{k=1}^{r} \langle Z, u_j v_k^H \rangle_{\mathbb{C}^{m \times n}}\, u_j v_k^H, \qquad P_2 Z = \sum_{j=r+1}^{m} \sum_{k=1}^{r} \langle Z, u_j v_k^H \rangle_{\mathbb{C}^{m \times n}}\, u_j v_k^H,$$
$$P_3 Z = \sum_{j=1}^{r} \sum_{k=r+1}^{n} \langle Z, u_j v_k^H \rangle_{\mathbb{C}^{m \times n}}\, u_j v_k^H, \qquad P_4 Z = \sum_{j=r+1}^{m} \sum_{k=r+1}^{n} \langle Z, u_j v_k^H \rangle_{\mathbb{C}^{m \times n}}\, u_j v_k^H.$$

Obviously, $P_1 + P_2 + P_3 + P_4 = I$, where $I$ is the identity operator on $\mathbb{C}^{m \times n}$. Also, $X_r = P_1 X$. By construction, $(P_1 Z)(P_4 Z)^H = 0$ and $(P_1 Z)^H (P_4 Z) = 0$ for all $Z \in \mathbb{C}^{m \times n}$. Then Corollary 3.5 implies

$$\|(P_1 + P_4) Z\|_* = \|P_1 Z\|_* + \|P_4 Z\|_* \quad \forall Z \in \mathbb{C}^{m \times n}.$$
(11)

Also note that $\operatorname{rank}(P_k Z) \le r$ for all $Z \in \mathbb{C}^{m \times n}$ and for $k = 1, 2, 3$.

Let $E = X^\star - X$, and let $P_4 E = \sum_{j \ge 1} \widetilde{\sigma}_j \widetilde{u}_j \widetilde{v}_j^H$ be the singular value decomposition of $P_4 E$ with singular values in decreasing order. Here $\widetilde{\sigma}_j = 0$ if $j > \operatorname{rank}(P_4 E)$. For $k \ge 1$, define the projection operator $Q_k$ by

$$Q_k Z = \sum_{j=(k-1)r+1}^{kr} \langle Z, \widetilde{u}_j \widetilde{v}_j^H \rangle_{\mathbb{C}^{m \times n}}\, \widetilde{u}_j \widetilde{v}_j^H.$$

Then we have $P_4 E = \sum_{k \ge 1} Q_k E$ and $\operatorname{rank}(Q_k E) \le r$ for all $k \ge 1$. Now, for all $k \ge 2$, we have

$$\|Q_k E\|_F \le r^{1/2} \|Q_k E\|_2 \le r^{-1/2} \|Q_{k-1} E\|_*,$$

where $\|Q_k E\|_2$ here denotes the spectral norm (the largest singular value of $Q_k E$ is at most the average of the singular values of $Q_{k-1} E$), and therefore

$$\sum_{k \ge 2} \|Q_k E\|_F \le r^{-1/2} \sum_{k \ge 1} \|Q_k E\|_* = r^{-1/2} \|P_4 E\|_*. \tag{12}$$

It follows that

$$\|P_4 E - Q_1 E\|_F = \Big\| \sum_{k \ge 2} Q_k E \Big\|_F \le \sum_{k \ge 2} \|Q_k E\|_F \le r^{-1/2} \|P_4 E\|_*. \tag{13}$$

Next, since $X^\star$ is a solution to P and $X$ itself is feasible for P ($\|\mathcal{A}X - b\|_2 = \|\nu\|_2 \le \epsilon$),

$$\|X\|_* \ge \|X^\star\|_* = \|X + E\|_* = \|(P_1 + P_2 + P_3 + P_4)(X + E)\|_*$$
$$\ge \|(P_1 + P_4)(X + E)\|_* - \|(P_2 + P_3)(X + E)\|_*$$
$$= \|P_1 (X + E)\|_* + \|P_4 (X + E)\|_* - \|(P_2 + P_3)(X + E)\|_*$$
$$\ge \|P_1 X\|_* - \|P_1 E\|_* + \|P_4 E\|_* - \|P_4 X\|_* - \|(P_2 + P_3) X\|_* - \|(P_2 + P_3) E\|_*,$$

where the equality in the third line follows from (11). Therefore

$$\|P_4 E\|_* \le \underbrace{\|P_1 E\|_* + \|(P_2 + P_3) E\|_*}_{(a)} + \underbrace{\|X\|_* - \|P_1 X\|_* + \|(P_2 + P_3) X\|_* + \|P_4 X\|_*}_{(b)}.$$

Lemma 3.7. Let $\alpha > 0$ be a constant and let $x, y \in \mathbb{R}$ satisfy $x^2 + y^2 = 1$. Then $x + \alpha y \le 2\alpha / \sqrt{\alpha^2 + 1}$.

Proof. Let $(x_0, y_0) = \arg\max_{(x, y)} \{x + \alpha y : x^2 + y^2 = 1\}$. We may assume $x_0, y_0 \ge 0$, and then $y_0 = \sqrt{1 - x_0^2}$. Let $f(x) = x + \alpha \sqrt{1 - x^2}$. Then

$$\left. \frac{df(x)}{dx} \right|_{x = x_0} = 1 - \frac{\alpha x_0}{\sqrt{1 - x_0^2}} = 0.$$

Therefore the maximum $2\alpha / \sqrt{\alpha^2 + 1}$ is achieved at $x_0 = \alpha / \sqrt{\alpha^2 + 1}$ and $y_0 = 1 / \sqrt{\alpha^2 + 1}$.

Define a constant $\gamma \triangleq \dfrac{2\sqrt{2}}{\sqrt{3}}$.
We further bound (a) by

$$\|P_1 E\|_* + \|(P_2 + P_3) E\|_* \le r^{1/2} \|P_1 E\|_F + (2r)^{1/2} \|(P_2 + P_3) E\|_F \le \gamma r^{1/2} \|(P_1 + P_2 + P_3) E\|_F,$$

where the first inequality follows from Lemma 3.6 with $\operatorname{rank}(P_1 E) \le r$ and $\operatorname{rank}((P_2 + P_3) E) \le 2r$, and the second inequality is obtained by invoking Lemma 3.7 with $x = \|P_1 E\|_F / \|(P_1 + P_2 + P_3) E\|_F$, $y = \|(P_2 + P_3) E\|_F / \|(P_1 + P_2 + P_3) E\|_F$, and $\alpha = \sqrt{2}$.

Next we further bound (b) by

$$\|X\|_* - \|P_1 X\|_* + \|(P_2 + P_3) X\|_* + \|P_4 X\|_*$$
$$\le \|P_1 X\|_* + \|P_4 X\|_* + \|(P_2 + P_3) X\|_* - \|P_1 X\|_* + \|(P_2 + P_3) X\|_* + \|P_4 X\|_*$$
$$= 2 \|(P_2 + P_3) X\|_* + 2 \|P_4 X\|_*$$
$$\le 2 (2r)^{1/2} \|(P_2 + P_3) X\|_F + 2 r^{1/2} \|P_4 X\|_F$$
$$\le 2 \gamma r^{1/2} \|(P_2 + P_3 + P_4) X\|_F = 2 \gamma r^{1/2} \|X - X_r\|_F.$$

Therefore

$$\|P_4 E\|_* \le \gamma r^{1/2} \|(P_1 + P_2 + P_3) E\|_F + 2 \gamma r^{1/2} \|X - X_r\|_F$$
$$\le \gamma r^{1/2} \|(P_1 + P_2 + P_3) E + Q_1 E\|_F + 2 \gamma r^{1/2} \|X - X_r\|_F. \tag{14}$$

Here we used the fact that, by construction, $Q_1$ is orthogonal to $P_1$, $P_2$, and $P_3$. Combining (13) and (14), we have

$$\|P_4 E - Q_1 E\|_F \le \gamma \|(P_1 + P_2 + P_3) E + Q_1 E\|_F + 2 \gamma \|X - X_r\|_F. \tag{15}$$

Next we bound $\|E - (P_4 E - Q_1 E)\|_F = \|(P_1 + P_2 + P_3) E + Q_1 E\|_F$. Since $\operatorname{rank}((P_1 + P_2 + P_3) E) \le 2r$ and $\operatorname{rank}(Q_1 E) \le r$, by the subadditivity of rank, $\operatorname{rank}(E - (P_4 E - Q_1 E)) \le 3r$. Since $P_4 E - Q_1 E = \sum_{k \ge 2} Q_k E$,

$$\|\mathcal{A}(E - (P_4 E - Q_1 E))\|_2^2 = \underbrace{\langle \mathcal{A}(E - (P_4 E - Q_1 E)), \mathcal{A}E \rangle_{\mathbb{C}^p}}_{(c)} - \underbrace{\Big\langle \mathcal{A}(E - (P_4 E - Q_1 E)), \mathcal{A} \sum_{k \ge 2} Q_k E \Big\rangle_{\mathbb{C}^p}}_{(d)}. \tag{16}$$

We bound (c) by

$$\langle \mathcal{A}(E - (P_4 E - Q_1 E)), \mathcal{A}E \rangle_{\mathbb{C}^p} \le \|\mathcal{A}(E - (P_4 E - Q_1 E))\|_2 \|\mathcal{A}E\|_2 \le 2 \epsilon \sqrt{1 + \delta_{3r}(\mathcal{A})}\, \|E - (P_4 E - Q_1 E)\|_F.$$
(17)

Here we used the rank-restricted isometry property of $\mathcal{A}$ with $\operatorname{rank}(E - (P_4 E - Q_1 E)) \le 3r$, and the fact that

$$\|\mathcal{A}E\|_2 = \|\mathcal{A}(X^\star - X)\|_2 \le \|b - \mathcal{A}X^\star\|_2 + \|b - \mathcal{A}X\|_2 \le 2\epsilon,$$

because $X^\star$ is a solution to P. Next we bound (d). For each $k \ge 2$,

$$|\langle \mathcal{A}(E - (P_4 E - Q_1 E)), \mathcal{A} Q_k E \rangle_{\mathbb{C}^p}| = |\langle \mathcal{A}((P_1 + P_2 + P_3) E + Q_1 E), \mathcal{A} Q_k E \rangle_{\mathbb{C}^p}|$$
$$\le |\langle \mathcal{A}(P_1 E + Q_1 E), \mathcal{A} Q_k E \rangle_{\mathbb{C}^p}| + |\langle \mathcal{A}(P_2 + P_3) E, \mathcal{A} Q_k E \rangle_{\mathbb{C}^p}|$$
$$\le \delta_{3r}(\mathcal{A}) \|P_1 E + Q_1 E\|_F \|Q_k E\|_F + \delta_{3r}(\mathcal{A}) \|(P_2 + P_3) E\|_F \|Q_k E\|_F$$
$$\le \sqrt{2}\, \delta_{3r}(\mathcal{A}) \|(P_1 + P_2 + P_3) E + Q_1 E\|_F \|Q_k E\|_F$$
$$= \sqrt{2}\, \delta_{3r}(\mathcal{A}) \|E - (P_4 E - Q_1 E)\|_F \|Q_k E\|_F, \tag{18}$$

where the second inequality follows from Proposition 3.3, because $Q_k P_j = 0$ for $j = 1, 2, 3$ and $Q_k Q_1 = 0$ for $k \ge 2$, and these projections are defined by pairwise orthogonal rank-one matrices.

Applying (17) and (18) to (16), we have

$$\|\mathcal{A}(E - (P_4 E - Q_1 E))\|_2^2 \le \|E - (P_4 E - Q_1 E)\|_F \Big( 2\epsilon \sqrt{1 + \delta_{3r}(\mathcal{A})} + \sqrt{2}\, \delta_{3r}(\mathcal{A}) \sum_{k \ge 2} \|Q_k E\|_F \Big)$$
$$\le \|E - (P_4 E - Q_1 E)\|_F \Big( 2\epsilon \sqrt{1 + \delta_{3r}(\mathcal{A})} + \sqrt{2}\, \delta_{3r}(\mathcal{A})\, r^{-1/2} \|P_4 E\|_* \Big), \tag{19}$$

where the second inequality follows from (12). From the rank-restricted isometry property of $\mathcal{A}$,

$$\|\mathcal{A}(E - (P_4 E - Q_1 E))\|_2^2 \ge (1 - \delta_{3r}(\mathcal{A})) \|E - (P_4 E - Q_1 E)\|_F^2. \tag{20}$$

Combining (19) and (20), we obtain

$$\|E - (P_4 E - Q_1 E)\|_F \le \alpha \epsilon + \rho\, r^{-1/2} \|P_4 E\|_*,$$

where

$$\alpha = \frac{2\sqrt{1 + \delta_{3r}(\mathcal{A})}}{1 - \delta_{3r}(\mathcal{A})}, \qquad \rho = \frac{\sqrt{2}\, \delta_{3r}(\mathcal{A})}{1 - \delta_{3r}(\mathcal{A})}.$$

Using (14) and the fact that $Q_1$ is orthogonal to $P_1$, $P_2$, and $P_3$,

$$\|E - (P_4 E - Q_1 E)\|_F \le \alpha \epsilon + \gamma \rho \|(P_1 + P_2 + P_3) E\|_F + 2 \gamma \rho \|X - X_r\|_F$$
$$\le \alpha \epsilon + \gamma \rho \|(P_1 + P_2 + P_3) E + Q_1 E\|_F + 2 \gamma \rho \|X - X_r\|_F$$
$$= \alpha \epsilon + \gamma \rho \|E - (P_4 E - Q_1 E)\|_F + 2 \gamma \rho \|X - X_r\|_F.$$
To proceed, we use the assumption $\delta_{3r}(\mathcal{A}) < \frac{1}{1 + 4/\sqrt{3}}$, which implies $1 - \gamma \rho > 0$, hence

$$\|E - (P_4 E - Q_1 E)\|_F \le (1 - \gamma \rho)^{-1} (\alpha \epsilon + 2 \gamma \rho \|X - X_r\|_F).$$

Finally,

$$\|E\|_F \le \|E - (P_4 E - Q_1 E)\|_F + \|P_4 E - Q_1 E\|_F$$
$$\le (1 + \gamma) \|E - (P_4 E - Q_1 E)\|_F + 2 \gamma \|X - X_r\|_F$$
$$\le (1 - \gamma \rho)^{-1} [(1 + \gamma) \alpha \epsilon + 2 \gamma (1 + \rho) \|X - X_r\|_F],$$

where the second inequality follows from (15).

4 Conclusion

In this paper, we derived an extended performance guarantee for nuclear norm minimization with an ellipsoidal constraint. Unlike existing performance guarantees, this constraint accommodates problem formulations in which the matrix is only approximately low-rank, or in which there is noise in the measurements. The condition for the performance guarantee is given in terms of the rank-restricted isometry property of the linear operator in the constraint. The new performance guarantee in this paper ensures the quality of a low-rank approximate solution obtained by nuclear norm minimization with an ellipsoidal constraint. Such an approximate solution can be found by using existing polynomial-time algorithms.

References

[Can08] E. J. Candès. The restricted isometry property and its implications for compressed sensing. Comptes Rendus Mathématique, 346(9-10):589-592, 2008.

[CCS08] J.-F. Cai, E. J. Candès, and Z. Shen. A singular value thresholding algorithm for matrix completion. arXiv preprint arXiv:0810.3286, 2008.

[CT05] E. J. Candès and T. Tao. Decoding by linear programming. IEEE Transactions on Information Theory, 51(12):4203-4215, 2005.

[Faz02] M. Fazel. Matrix Rank Minimization with Applications. PhD thesis, Stanford University, 2002.

[FHB01] M. Fazel, H. Hindi, and S. P. Boyd. A rank minimization heuristic with application to minimum order system approximation. In Proceedings of the American Control Conference, volume 6, pages 4734-4739, 2001.
[LLR95] N. Linial, E. London, and Y. Rabinovich. The geometry of graphs and some of its algorithmic applications. Combinatorica, 15(2):215-245, 1995.

[RFP07] B. Recht, M. Fazel, and P. A. Parrilo. Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. arXiv preprint arXiv:0706.4138, 2007.
