A pivotal transform for the high-dimensional location-scale model


We study the high-dimensional linear model with noise distribution known up to a scale parameter. With an $\ell_1$-penalty on the regression coefficients, we show that a transformation of the log-likelihood allows for a choice of the tuning parameter that does not depend on the scale parameter. This transformation generalizes the square-root Lasso for quadratic loss. The tuning parameter can asymptotically be taken at the detection edge. We establish an oracle inequality, variable selection consistency, and asymptotic efficiency of the estimators of the scale parameter and the intercept. Examples include the Subbotin distributions and the Gumbel distribution.


💡 Research Summary

The paper addresses high‑dimensional linear regression where the response follows
$y_i = x_i^\top \beta^* + \sigma^* \xi_i$,
the noise $\xi_i$ having a known density $f$ up to an unknown scale $\sigma^*$. Classical Lasso methods rely on quadratic loss and either assume $\sigma^*$ is known or estimate it separately; the square‑root Lasso (Belloni et al., 2011; Sun and Zhang, 2012) solves the Gaussian case by taking the square root of the residual sum of squares, making the tuning parameter independent of $\sigma^*$. However, for non‑Gaussian, log‑concave noise the square‑root trick does not apply.
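The pivotality of the square-root Lasso can be illustrated numerically: rescaling the noise level rescales the whole objective uniformly, so the minimizer is scale-equivariant and the tuning parameter can be chosen without knowing $\sigma^*$. A minimal sketch (not from the paper; the factor `1.1 * sqrt(2 log p / n)` is a typical pivotal tuning choice, assumed here for concreteness):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 10
X = rng.standard_normal((n, p))
beta = rng.standard_normal(p)
y = X @ beta + rng.standard_normal(n)
lam = 1.1 * np.sqrt(2 * np.log(p) / n)  # tuning choice with no sigma in it

def sqrt_lasso_obj(b, X, y, lam):
    """Square-root Lasso objective: ||y - Xb||_2 / sqrt(n) + lam * ||b||_1."""
    n = len(y)
    return np.linalg.norm(y - X @ b) / np.sqrt(n) + lam * np.abs(b).sum()

# Scaling the data by c scales the objective (at correspondingly scaled
# coefficients) by exactly c, so the argmin is scale-equivariant.
c = 3.7
lhs = sqrt_lasso_obj(c * beta, X, c * y, lam)
rhs = c * sqrt_lasso_obj(beta, X, y, lam)
assert np.isclose(lhs, rhs)
```

Both the fit term and the penalty are positively homogeneous of degree one in $(\beta, y)$, which is exactly the property the exp-transform below extends to general log-concave likelihoods.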

The authors propose a pivotal transformation of the negative log‑likelihood: they define the empirical risk
$R_n(\beta,\sigma)=\frac{1}{n}\sum_{i=1}^n \ell_{\beta,\sigma}(x_i,y_i)$
with $\ell_{\beta,\sigma}(x,y)= -\log f\big((y-x^\top\beta)/\sigma\big)+\log\sigma$.
Applying the exponential map $\phi(u)=\exp(u)$ yields the “exp‑Lasso” estimator
$$(\hat\beta,\hat\sigma)\in\operatorname*{arg\,min}_{\beta,\,\sigma>0}\ \exp\big(R_n(\beta,\sigma)\big)+\lambda\|\beta\|_1.$$
For Gaussian noise, $\exp\big(\min_{\sigma>0}R_n(\beta,\sigma)\big)$ is proportional to $\|y-X\beta\|_2/\sqrt{n}$, so the exp‑Lasso recovers the square‑root Lasso objective.
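The reduction to the square-root Lasso in the Gaussian case can be checked numerically. A sketch assuming the risk $R_n$ defined above; `neg_log_gauss` drops the additive Gaussian normalizing constant, which only shifts $R_n$ and multiplies $\exp(R_n)$ by a fixed factor:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 40, 5
X = rng.standard_normal((n, p))
beta = rng.standard_normal(p)
y = X @ beta + 0.5 * rng.standard_normal(n)

def R_n(beta, sigma, X, y, neg_log_f):
    """Empirical risk: average of -log f((y_i - x_i'beta)/sigma) + log sigma."""
    u = (y - X @ beta) / sigma
    return np.mean(neg_log_f(u)) + np.log(sigma)

# Gaussian noise: -log f(u) = u^2/2 up to an additive constant.
neg_log_gauss = lambda u: 0.5 * u**2

rss = np.sum((y - X @ beta) ** 2)
sigma_hat = np.sqrt(rss / n)  # closed-form minimizer of R_n over sigma
profiled = np.exp(R_n(beta, sigma_hat, X, y, neg_log_gauss))

# exp of the profiled Gaussian risk equals sqrt(e) * ||y - X beta||_2 / sqrt(n),
# i.e. the square-root Lasso fit term up to a constant factor.
assert np.isclose(profiled, np.sqrt(np.e * rss / n))
```

At $\hat\sigma^2 = \mathrm{RSS}/n$ the profiled risk is $\tfrac12 + \tfrac12\log(\mathrm{RSS}/n)$, so its exponential is $\sqrt{e}\,\sqrt{\mathrm{RSS}/n}$, a constant multiple of the square-root Lasso loss.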

