Robust Matrix Decomposition with Outliers
Suppose a given observation matrix can be decomposed as the sum of a low-rank matrix and a sparse matrix (outliers), and the goal is to recover these individual components from the observed sum. Such additive decompositions have applications in a variety of numerical problems including system identification, latent variable graphical modeling, and principal components analysis. We study conditions under which recovering such a decomposition is possible via a combination of $\ell_1$ norm and trace norm minimization. We are specifically interested in the question of how many outliers are allowed so that convex programming can still achieve accurate recovery, and we obtain stronger recovery guarantees than previous studies. Moreover, we do not assume that the spatial pattern of outliers is random, which stands in contrast to related analyses under such assumptions via matrix completion.
💡 Research Summary
The paper addresses the problem of decomposing an observed data matrix $M$ into a low-rank component $L_0$ and a sparse outlier component $S_0$, i.e., $M = L_0 + S_0$. This formulation underlies many applications such as system identification, latent-variable graphical modeling, and robust principal component analysis (PCA). The authors propose to recover the two components via a convex program that combines the nuclear (trace) norm, which promotes low rank, and the $\ell_1$ norm, which promotes sparsity:
$$\min_{L,\,S} \; \|L\|_{*} + \lambda \|S\|_{1} \quad \text{subject to} \quad L + S = M,$$

where $\|L\|_{*}$ denotes the trace (nuclear) norm of $L$, $\|S\|_{1}$ the entrywise $\ell_1$ norm of $S$, and $\lambda > 0$ balances the two terms.
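A minimal sketch of how this convex program can be solved in practice, using the standard ADMM (alternating direction method of multipliers) iteration for principal component pursuit; this is a generic solver for the trace-norm-plus-$\ell_1$ objective, not the specific analysis or algorithm of the paper, and the default choices of `lam` and `mu` follow common conventions rather than anything prescribed here:

```python
import numpy as np

def robust_pca(M, lam=None, mu=None, tol=1e-7, max_iter=500):
    """Decompose M ~= L + S with L low-rank and S sparse, via ADMM.

    Solves  min ||L||_* + lam * ||S||_1  subject to  L + S = M.
    lam defaults to 1/sqrt(max(m, n)), a common heuristic choice.
    """
    m, n = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))
    if mu is None:
        mu = (m * n) / (4.0 * np.abs(M).sum())  # conventional step size

    def shrink(X, t):
        # Entrywise soft-thresholding: prox operator of the l1 norm.
        return np.sign(X) * np.maximum(np.abs(X) - t, 0.0)

    S = np.zeros_like(M)
    Y = np.zeros_like(M)  # dual variable for the constraint L + S = M
    for _ in range(max_iter):
        # Singular value thresholding: prox operator of the trace norm.
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * shrink(sig, 1.0 / mu)) @ Vt
        # Sparse update via soft-thresholding.
        S = shrink(M - L + Y / mu, lam / mu)
        # Dual ascent on the equality constraint residual.
        R = M - L - S
        Y += mu * R
        if np.linalg.norm(R) <= tol * np.linalg.norm(M):
            break
    return L, S
```

On well-conditioned inputs (an incoherent low-rank part plus a modest fraction of outliers), the iteration typically recovers both components to high accuracy within a few hundred iterations.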