Truncated Nuclear Norm Minimization for Image Restoration Based On Iterative Support Detection
Recovering a large matrix from limited measurements is a challenging task arising in many real applications, such as image inpainting, compressive sensing, and medical imaging, and such problems are mostly formulated as low-rank matrix approximation problems. Because the rank operator is non-convex and discontinuous, most recent theoretical studies use the nuclear norm as a convex relaxation, and the low-rank matrix recovery problem is solved by minimizing a nuclear-norm-regularized objective. However, a major limitation of nuclear norm minimization is that all singular values are minimized simultaneously, so the rank may not be well approximated \citep{hu2012fast}. In this paper, we propose a new multi-stage algorithm that combines Truncated Nuclear Norm Regularization (TNNR), proposed in \citep{hu2012fast}, with Iterative Support Detection (ISD), proposed in \citep{wang2010sparse}, to overcome this limitation. Beyond the matrix completion problems considered in \citep{hu2012fast}, the proposed method can also be extended to general low-rank matrix recovery problems. Extensive experiments validate the superiority of our new algorithms over other state-of-the-art methods.
💡 Research Summary
The paper addresses the problem of recovering a low-rank matrix from incomplete or compressed measurements, a task that appears in image inpainting, compressive sensing, and medical imaging. Traditional approaches replace the NP-hard rank minimization with nuclear-norm minimization (NNM), which penalizes the sum of all singular values. While NNM provides a convex surrogate, it treats every singular value equally, which often leads to a sub-optimal rank approximation: the true rank is determined only by the number of significant singular values, not by their magnitudes.
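As a minimal numerical illustration (not code from the paper), the snippet below contrasts the nuclear norm with the rank for a random low-rank matrix: the nuclear norm is dominated by the magnitudes of the large singular values, while the rank only counts how many singular values are nonzero. The matrix construction and threshold are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
# Build a 6x6 matrix of rank (at most) 3 as a product of thin factors.
A = rng.standard_normal((6, 3)) @ rng.standard_normal((3, 6))

sigma = np.linalg.svd(A, compute_uv=False)  # singular values, descending
nuclear_norm = sigma.sum()                  # penalizes ALL singular values
rank = int((sigma > 1e-10).sum())           # rank ignores their magnitudes
```

Scaling `A` by 10 multiplies `nuclear_norm` by 10 but leaves `rank` unchanged, which is why the nuclear norm can be a loose proxy for the rank.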
To overcome this limitation, the authors adopt the Truncated Nuclear Norm Regularization (TNNR) introduced by Hu et al. (2012). TNNR minimizes only the smallest $\min(m,n)-r$ singular values, i.e., the objective becomes
\[
\|X\|_r \;=\; \sum_{i=r+1}^{\min(m,n)} \sigma_i(X),
\]
where $\sigma_i(X)$ denotes the $i$-th largest singular value of $X \in \mathbb{R}^{m \times n}$.
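A small sketch of the truncated nuclear norm defined above, assuming NumPy (the function name is ours, not from the paper): it sums only the $\min(m,n)-r$ smallest singular values, so an exactly rank-$r$ matrix incurs (numerically) zero penalty.

```python
import numpy as np

def truncated_nuclear_norm(X, r):
    """||X||_r: sum of the min(m,n)-r smallest singular values of X."""
    sigma = np.linalg.svd(X, compute_uv=False)  # sorted in descending order
    return sigma[r:].sum()                      # skip the r largest values

rng = np.random.default_rng(1)
# Rank-2 matrix: its truncated norm with r=2 is numerically zero,
# while r=0 recovers the ordinary nuclear norm.
A = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 5))
```

With `r=2` the penalty vanishes for this `A`, so minimizing $\|X\|_r$ leaves the $r$ dominant singular values (which carry the matrix's main structure) unpenalized.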