Adaptive Lasso for High Dimensional Regression and Gaussian Graphical Modeling
We show that the two-stage adaptive Lasso procedure (Zou, 2006) is consistent for high-dimensional model selection in linear and Gaussian graphical models. Our conditions for consistency cover more general situations than those considered in previous work: we prove that restricted eigenvalue conditions (Bickel et al., 2008) are also sufficient for sparse structure estimation.
💡 Research Summary
This paper revisits the two‑stage Adaptive Lasso (Zou, 2006) and establishes its model‑selection consistency for high‑dimensional linear regression and Gaussian graphical models under the restricted eigenvalue (RE) condition introduced by Bickel, Ritov, and Tsybakov (2008). The authors argue that the traditional consistency proofs for Adaptive Lasso rely on strong assumptions such as the irrepresentable condition or mutual incoherence, which are often violated in practice. By replacing these with the much milder RE condition, they broaden the applicability of Adaptive Lasso to a far larger class of high‑dimensional problems.
Problem setting
- Linear regression: \(y = X\beta^{*} + \varepsilon\) with \(X \in \mathbb{R}^{n \times p}\), \(p \gg n\), and a sparse true coefficient vector \(\beta^{*}\) having \(s\) non-zero entries.
- Gaussian graphical model (GGM): \(X \sim N(0,\Sigma^{*})\) with precision matrix \(\Theta^{*} = (\Sigma^{*})^{-1}\) that is sparse; \(\Theta^{*}_{jk}=0\) iff variables \(j\) and \(k\) are conditionally independent given the remaining variables.
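The conditional-independence reading of sparsity in \(\Theta^{*}\) can be checked numerically. The chain-graph precision matrix below is a hypothetical illustration (not taken from the paper): a zero off-diagonal entry of \(\Theta\) gives a zero partial correlation, even though the corresponding covariance entry is non-zero.

```python
import numpy as np

# Hypothetical sparse precision matrix for a 4-node chain graph:
# non-zeros only on the diagonal and between chain neighbors.
Theta = np.array([
    [2.0, 0.6, 0.0, 0.0],
    [0.6, 2.0, 0.6, 0.0],
    [0.0, 0.6, 2.0, 0.6],
    [0.0, 0.0, 0.6, 2.0],
])
Sigma = np.linalg.inv(Theta)  # covariance of X ~ N(0, Sigma)

# Theta[j, k] = 0 iff nodes j and k are conditionally independent given
# the rest; equivalently, their partial correlation vanishes:
partial_corr = -Theta[0, 2] / np.sqrt(Theta[0, 0] * Theta[2, 2])
```

Note that `Sigma[0, 2]` is non-zero here: nodes 0 and 2 are marginally dependent through node 1, so sparsity lives in the precision matrix, not the covariance.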
Methodology
- Stage‑1 – Obtain an initial estimator \(\hat\beta^{\text{init}}\) using the ordinary Lasso (or a scaled least‑squares variant) with tuning parameter \(\lambda_{1}\).
- Weight construction – Define adaptive weights \(w_{j}=|\hat\beta^{\text{init}}_{j}|^{-\gamma}\) (commonly \(\gamma=1\)). Coefficients with large initial estimates receive small weights, reducing their penalty in the second stage.
- Stage‑2 – Solve the weighted Lasso problem
\[
\hat\beta = \arg\min_{\beta \in \mathbb{R}^{p}} \; \frac{1}{2n}\|y - X\beta\|_{2}^{2} + \lambda_{2}\sum_{j=1}^{p} w_{j}\,|\beta_{j}|.
\]
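The two-stage procedure above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' code: the function name `adaptive_lasso` and the specific tuning values are assumptions, and the stage-2 weighted problem is solved via the standard column-rescaling trick (replace \(X_j\) by \(X_j / w_j\), run an ordinary Lasso, then rescale the coefficients back).

```python
import numpy as np
from sklearn.linear_model import Lasso

def adaptive_lasso(X, y, lam1=0.1, lam2=0.1, gamma=1.0, eps=1e-8):
    """Two-stage adaptive Lasso (illustrative sketch)."""
    # Stage 1: ordinary Lasso as the initial estimator.
    beta_init = Lasso(alpha=lam1).fit(X, y).coef_
    # Adaptive weights w_j = |beta_init_j|^{-gamma}; eps guards against
    # exact zeros (which would otherwise give infinite weights).
    w = 1.0 / (np.abs(beta_init) + eps) ** gamma
    # Stage 2: weighted Lasso via column rescaling X_j -> X_j / w_j.
    b = Lasso(alpha=lam2).fit(X / w, y).coef_
    # Undo the rescaling to recover the weighted-Lasso solution.
    return b / w
```

Variables with small initial estimates get enormous weights, so their rescaled columns are essentially zero and stage 2 discards them; variables with large initial estimates are penalized only lightly, which is what drives the selection-consistency argument under the RE condition.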