A LASSO-Penalized BIC for Mixture Model Selection
The efficacy of family-based approaches to mixture model-based clustering and classification depends on the selection of parsimonious models. Current wisdom suggests the Bayesian information criterion (BIC) for mixture model selection. However, the BIC has well-known limitations, including a tendency to overestimate the number of components as well as a proclivity for underestimating, often drastically, the number of components in higher dimensions. While the former problem might be soluble by merging components, the latter is impossible to mitigate in clustering and classification applications. In this paper, a LASSO-penalized BIC (LPBIC) is introduced to overcome this problem. The approach is illustrated through applications of extensions of mixtures of factor analyzers, where the LPBIC is used to select both the number of components and the number of latent factors. The LPBIC is shown to match or outperform the BIC in several situations.
💡 Research Summary
The paper addresses a fundamental problem in mixture-model-based clustering and classification: selecting a parsimonious model when the data are high-dimensional. The Bayesian information criterion (BIC) is the de facto standard for this task, yet it suffers from two well-documented deficiencies. First, because its penalty grows only logarithmically with the sample size, the BIC tends to over-select the number of mixture components in low-dimensional settings. Second, in high-dimensional settings the number of free parameters, and with it the BIC penalty, grows rapidly with the dimension, so the penalty can swamp any gain in log-likelihood and cause the BIC to drastically under-estimate the true number of components. The latter issue is especially problematic in applications such as mixtures of factor analyzers (MFA), where the number of latent factors must be chosen alongside the number of components.
To overcome these limitations, the authors propose a LASSO‑penalized BIC (LPBIC). The idea is to augment the usual BIC penalty with an L1‑norm penalty on the model parameters—specifically the mixing proportions and the factor loading matrices. The penalized likelihood becomes
\[
\ell_{\text{pen}}(\boldsymbol{\vartheta}) \;=\; \ell(\boldsymbol{\vartheta}) \;-\; n\lambda_n \sum_{g=1}^{G} \pi_g \lVert \boldsymbol{\Lambda}_g \rVert_1,
\]
where \(\ell(\boldsymbol{\vartheta})\) is the observed-data log-likelihood, \(\pi_g\) and \(\boldsymbol{\Lambda}_g\) are the mixing proportion and factor loading matrix of the \(g\)th component, \(\lVert\cdot\rVert_1\) denotes the entrywise L1 norm, and \(\lambda_n\) is a data-dependent tuning parameter.
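The shape of such a criterion can be sketched numerically. The snippet below fits Gaussian mixtures with scikit-learn and scores them with a toy penalized BIC; for simplicity the L1 penalty is placed on the component means rather than on factor loadings, and the helper name `penalized_bic` and the tuning value `lam=0.01` are illustrative assumptions, not the paper's exact LPBIC:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Two well-separated 4-dimensional Gaussian clusters (illustrative data).
X = np.vstack([rng.normal(-3.0, 1.0, (100, 4)),
               rng.normal(3.0, 1.0, (100, 4))])

def penalized_bic(gmm: GaussianMixture, X: np.ndarray, lam: float) -> float:
    """Toy LASSO-penalized BIC: the usual BIC (negated, so larger is
    better) minus an L1 penalty on the component means, weighted by the
    mixing proportions. NOTE: this only illustrates the general shape of
    the criterion; the paper penalizes parameters such as factor loadings
    and modifies the BIC penalty itself."""
    n = X.shape[0]
    l1 = sum(pi * np.abs(mu).sum()
             for pi, mu in zip(gmm.weights_, gmm.means_))
    return -gmm.bic(X) - 2.0 * n * lam * l1

# Score candidate numbers of components with the penalized criterion.
for G in (1, 2, 3):
    gmm = GaussianMixture(n_components=G, random_state=0).fit(X)
    print(f"G={G}: penalized score = {penalized_bic(gmm, X, lam=0.01):.1f}")
```

Setting `lam=0` recovers the ordinary (negated) BIC, so the L1 term acts purely as an extra shrinkage penalty on top of the standard criterion.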