Predictive Low Rank Matrix Learning under Partial Observations: Mixed-Projection ADMM


We study the problem of learning a partially observed matrix under the low rank assumption in the presence of fully observed side information that depends linearly on the true underlying matrix. This problem is an important generalization of the Matrix Completion problem, a central problem in Statistics, Operations Research and Machine Learning, that arises in applications such as recommendation systems, signal processing, system identification and image denoising. We formalize this problem as an optimization problem with an objective that balances the strength of the fit of the reconstruction to the observed entries with the ability of the reconstruction to be predictive of the side information. We derive a mixed-projection reformulation of the resulting optimization problem and present a strong semidefinite cone relaxation. We design an efficient, scalable alternating direction method of multipliers algorithm that produces high quality feasible solutions to the problem of interest. Our numerical results demonstrate that in the small rank regime ($k \leq 10$), our algorithm outputs solutions that achieve on average $2.3\%$ lower objective value and $41\%$ lower $\ell_2$ reconstruction error than the solutions returned by the best performing benchmark method on synthetic data. The runtime of our algorithm is competitive with and often superior to that of the benchmark methods. Our algorithm is able to solve problems with $n = 10000$ rows and $m = 10000$ columns in less than a minute. On large scale real world data, our algorithm produces solutions that achieve $67\%$ lower out of sample error than benchmark methods in $97\%$ less execution time.


💡 Research Summary

The paper addresses a natural extension of the classic matrix completion problem: recovering a low‑rank matrix from a subset of its entries while simultaneously leveraging fully observed side information that is assumed to be a linear function of the underlying matrix. Formally, given a partially observed matrix $A \in \mathbb{R}^{n \times m}$ with observation set $\Omega$ and a side‑information matrix $Y \in \mathbb{R}^{n \times d}$ satisfying $Y = A\alpha + N$ (where $\alpha$ is an unknown weight matrix and $N$ is noise), the authors propose the following optimization model:

$$
\min_{X \in \mathbb{R}^{n \times m},\ \alpha \in \mathbb{R}^{m \times d}} \;\; \sum_{(i,j) \in \Omega} \left(X_{ij} - A_{ij}\right)^2 \;+\; \lambda \left\lVert Y - X\alpha \right\rVert_F^2 \quad \text{subject to} \quad \operatorname{rank}(X) \le k,
$$

where $\lambda > 0$ balances the fit to the observed entries against the predictive fit to the side information, and $k$ is the target rank.
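To make the two-term objective concrete, here is a minimal NumPy sketch that evaluates it on a small synthetic instance. The function `predictive_objective`, the boolean `mask` encoding $\Omega$, and the truncated-SVD warm start are illustrative assumptions of this summary, not the paper's mixed-projection ADMM algorithm:

```python
import numpy as np

def predictive_objective(X, alpha, A, mask, Y, lam):
    """Assumed objective: squared error on the observed entries of A
    plus lam times the Frobenius-norm side-information misfit."""
    fit = np.sum(mask * (X - A) ** 2)               # sum over (i, j) in Omega
    side = np.linalg.norm(Y - X @ alpha, "fro") ** 2  # ||Y - X alpha||_F^2
    return fit + lam * side

# Tiny synthetic instance: rank-2 ground truth, ~60% of entries observed.
rng = np.random.default_rng(0)
n, m, d, k = 30, 20, 3, 2
A_true = rng.standard_normal((n, k)) @ rng.standard_normal((k, m))
alpha_true = rng.standard_normal((m, d))
Y = A_true @ alpha_true + 0.01 * rng.standard_normal((n, d))
mask = rng.random((n, m)) < 0.6

# A rank-k warm start from the truncated SVD of the zero-filled
# observations, as one might use to initialize an iterative method.
U, s, Vt = np.linalg.svd(mask * A_true, full_matrices=False)
X0 = U[:, :k] * s[:k] @ Vt[:k]
alpha0 = np.linalg.lstsq(X0, Y, rcond=None)[0]  # least-squares fit of alpha

print(predictive_objective(X0, alpha0, A_true, mask, Y, lam=1.0))
```

The ground-truth pair $(A, \alpha)$ attains a near-zero objective (only the noise term survives), so any candidate solution can be scored against it with the same function.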

