Transposable regularized covariance models with an application to missing data imputation
Missing data estimation is an important challenge with high-dimensional data arranged in the form of a matrix. Typically this data matrix is transposable, meaning that either the rows, columns or both can be treated as features. To model transposable data, we present a modification of the matrix-variate normal, the mean-restricted matrix-variate normal, in which the rows and columns each have a separate mean vector and covariance matrix. By placing additive penalties on the inverse covariance matrices of the rows and columns, these so-called transposable regularized covariance models allow for maximum likelihood estimation of the mean and nonsingular covariance matrices. Using these models, we formulate EM-type algorithms for missing data imputation in both the multivariate and transposable frameworks. We present theoretical results exploiting the structure of our transposable models that allow these models and imputation methods to be applied to high-dimensional data. Simulations and results on microarray data and the Netflix data show that these imputation techniques often outperform existing methods and offer a greater degree of flexibility.
💡 Research Summary
This paper tackles the pervasive problem of missing‑value estimation in high‑dimensional matrix‑structured data, where the data matrix is “transposable” – that is, rows and columns can each be regarded as sets of features. Classical matrix‑variate normal models assume a single overall mean vector and a single covariance structure, which is inadequate when rows and columns possess distinct mean patterns and correlation structures, as is typical in genomics, recommender systems, and other modern applications.
To address this limitation, the authors introduce the mean-restricted matrix-variate normal distribution. In this formulation the observed data matrix $X \in \mathbb{R}^{n\times p}$ is decomposed as

$$X = M + E, \qquad M = \nu\,\mathbf{1}_p^\top + \mathbf{1}_n\,\mu^\top,$$

where $\nu \in \mathbb{R}^n$ is the row mean vector, $\mu \in \mathbb{R}^p$ is the column mean vector, and the error matrix $E$ follows a matrix-variate normal distribution with row covariance $\Sigma$ ($n \times n$) and column covariance $\Delta$ ($p \times p$). Rows and columns thus receive separate mean and covariance structures, matching the transposable nature of the data.
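To make the model concrete, here is a minimal numpy sketch of sampling from a mean-restricted matrix-variate normal. The parameter values (`nu`, `mu`, `Sigma`, `Delta`) are illustrative placeholders, not estimates from the paper; the draw uses the standard identity that $M + \Sigma^{1/2} Z \Delta^{1/2}$ with i.i.d. standard normal $Z$ has row covariance $\Sigma$ and column covariance $\Delta$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 5, 4  # small illustrative dimensions

# Hypothetical parameters (assumptions for this sketch, not from the paper):
nu = rng.normal(size=n)  # row mean vector (one entry per row)
mu = rng.normal(size=p)  # column mean vector (one entry per column)

# Random positive-definite row and column covariance matrices.
A = rng.normal(size=(n, n))
Sigma = A @ A.T + n * np.eye(n)   # row covariance, n x n
B = rng.normal(size=(p, p))
Delta = B @ B.T + p * np.eye(p)   # column covariance, p x p

# Mean matrix M = nu 1_p^T + 1_n mu^T: additive row and column means.
M = np.outer(nu, np.ones(p)) + np.outer(np.ones(n), mu)

# Draw E via Cholesky factors: E = L_Sigma Z L_Delta^T, Z ~ iid N(0, 1).
L_sigma = np.linalg.cholesky(Sigma)
L_delta = np.linalg.cholesky(Delta)
Z = rng.normal(size=(n, p))
X = M + L_sigma @ Z @ L_delta.T

print(X.shape)  # an n x p transposable data matrix
```

Under this construction, vectorizing $X$ gives a multivariate normal with covariance $\Delta \otimes \Sigma$, which is the structure the paper's penalized likelihood estimates exploit.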