Optimal designs for Lasso and Dantzig selector using Expander Codes


We investigate the high-dimensional regression problem using adjacency matrices of unbalanced expander graphs. In this framework, we prove that the $\ell_{2}$-prediction error and the $\ell_{1}$-risk of the Lasso and the Dantzig selector are optimal up to an explicit multiplicative constant. Thus we can estimate a high-dimensional target vector with an error term similar to the one obtained when the support of the largest coordinates is known in advance. Moreover, we show that these design matrices have an explicit restricted eigenvalue: they satisfy the restricted eigenvalue assumption and the compatibility condition with an explicit constant. Finally, we capitalize on the recent construction of unbalanced expander graphs due to Guruswami, Umans, and Vadhan to provide a deterministic polynomial-time construction of these design matrices.


💡 Research Summary

The paper addresses the fundamental problem of high‑dimensional linear regression where the number of covariates $p$ far exceeds the sample size $n$. Classical results on the Lasso and the Dantzig selector rely on random design matrices (Gaussian or sub‑Gaussian) and on probabilistic versions of the Restricted Eigenvalue (RE) or Restricted Isometry Property (RIP). While these random constructions guarantee optimal error rates up to constants, they do not provide explicit, deterministic matrices and the constants involved are often hidden.

The authors propose a completely different approach: they use the adjacency matrices of unbalanced expander graphs as deterministic design matrices. An unbalanced $(k,\epsilon)$‑expander is a bipartite graph with left side of size $p$, right side of size $n$, left degree $d$, and the property that every subset $S$ of left vertices with $|S|\le k$ has at least $(1-\epsilon)d|S|$ distinct neighbours. Translating this combinatorial property into linear algebra yields a $0/1$ matrix $X\in\mathbb{R}^{n\times p}$ whose columns are $d$‑sparse and whose mutual coherence is tightly controlled.
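The translation from graph to design matrix can be sketched numerically. The snippet below is an illustrative stand-in, not the paper's construction: it draws a *random* bipartite graph with left degree $d$ (such a graph is a $(k,\epsilon)$-expander only with high probability, whereas the paper relies on the deterministic Guruswami–Umans–Vadhan construction), builds its $0/1$ adjacency matrix, and checks the expansion ratio $|N(S)|/(d|S|)$ on small left subsets $S$.

```python
import itertools
import random

import numpy as np

def random_bipartite_design(p, n, d, seed=0):
    """Build a 0/1 matrix X in {0,1}^{n x p} whose columns are d-sparse.

    Each left vertex (column j) is connected to d distinct right vertices
    (rows). A random choice of neighbours yields an expander only with high
    probability; the paper instead uses the deterministic
    Guruswami-Umans-Vadhan construction.
    """
    rng = random.Random(seed)
    X = np.zeros((n, p), dtype=int)
    for j in range(p):
        for i in rng.sample(range(n), d):
            X[i, j] = 1
    return X

def expansion_ratio(X, S):
    """Return |N(S)| / (d |S|) for a set S of left vertices (columns).

    The ratio is 1 for perfect expansion; a (k, eps)-expander guarantees
    it is at least 1 - eps whenever |S| <= k.
    """
    d = int(X[:, S[0]].sum())                      # left degree
    neighbours = np.flatnonzero(X[:, list(S)].sum(axis=1))
    return len(neighbours) / (d * len(S))

X = random_bipartite_design(p=40, n=20, d=4)
worst = min(expansion_ratio(X, S)
            for S in itertools.combinations(range(40), 2))
print(f"worst expansion ratio over column pairs: {worst:.2f}")
```

The columns of `X` are exactly $d$-sparse by construction, and the empirical `worst` ratio over pairs gives a crude lower bound on the expansion parameter $1-\epsilon$ at $k=2$; checking all subsets up to a larger $k$ is combinatorial, which is precisely why explicit constructions matter.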

The first technical contribution is a rigorous proof that such matrices satisfy the RE condition with an explicit constant.
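For reference, the restricted eigenvalue assumption in the sense of Bickel, Ritov, and Tsybakov requires the following quantity to be bounded away from zero (the explicit value of the bound for expander designs is given in the original paper):

```latex
\kappa(k, c_0) \;=\;
\min_{\substack{S \subseteq \{1,\dots,p\} \\ |S| \le k}}\;
\min_{\substack{\Delta \neq 0 \\ \|\Delta_{S^c}\|_1 \le c_0 \|\Delta_S\|_1}}
\frac{\|X\Delta\|_2}{\sqrt{n}\,\|\Delta_S\|_2} \;>\; 0,
```

where the inner minimum runs over the cone of vectors whose mass concentrates on the candidate support $S$, the regime relevant to the Lasso and Dantzig selector.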

