Conditional regression for the Nonlinear Single-Variable Model
Regressing a function $F$ on $\mathbb{R}^d$ without the statistical and computational curse of dimensionality requires special statistical models, for example ones that impose geometric assumptions on the distribution of the data (e.g., that its support is low-dimensional), strong smoothness assumptions on $F$, or a special structure for $F$. Among the latter, compositional models $F=f\circ g$ with $g$ mapping to $\mathbb{R}^r$, $r\ll d$, include classical single- and multi-index models, as well as neural networks. While the case where $g$ is linear is well understood, less is known when $g$ is nonlinear, and in particular for which $g$'s the curse of dimensionality in estimating $F$, or both $f$ and $g$, may be circumvented. Here we consider a model $F(X):=f(\Pi_\gamma(X))$, where $\Pi_\gamma:\mathbb{R}^d\to[0,\mathrm{len}_\gamma]$ is the closest-point projection onto the parameter of a regular curve $\gamma:[0,\mathrm{len}_\gamma]\to\mathbb{R}^d$ and $f:[0,\mathrm{len}_\gamma]\to\mathbb{R}$. The input data $X$ is not low-dimensional: it can be as far from $\gamma$ as the condition that $\Pi_\gamma(X)$ is well-defined allows. The distribution of $X$, the curve $\gamma$, and the function $f$ are all unknown. This model is a natural nonlinear generalization of the single-index model, which corresponds to $\gamma$ being a line. We propose a nonparametric estimator, based on conditional regression, that, under suitable assumptions (the strongest of which is that $f$ is coarsely monotone), achieves, up to log factors, the $\textit{one-dimensional}$ optimal minimax rate for nonparametric regression, up to the level of noise in the observations, and can be constructed in time $\mathcal{O}(d^2 n\log n)$. All the constants in the learning bounds, in the minimal number of samples required for our bounds to hold, and in the computational complexity are at most low-order polynomials in $d$.
💡 Research Summary
The paper addresses the fundamental challenge of high-dimensional nonparametric regression, where the minimax rate $n^{-s/(2s+d)}$ deteriorates rapidly with the ambient dimension $d$. Classical ways to avoid this "curse of dimensionality" either assume that the covariate distribution lives on a low-dimensional manifold, or impose a compositional structure $F = f \circ g$ with a linear inner map $g$ (e.g., single- or multi-index models). While the linear-inner case is well understood, little is known when $g$ is nonlinear.
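As a quick back-of-the-envelope illustration (ours, not the paper's), the snippet below evaluates this rate exponent for a fixed smoothness $s=2$: driving the error below $0.1$ takes roughly $3\times 10^2$ samples when $d=1$, but about $10^{52}$ when $d=100$.

```python
# Illustrative only: how the nonparametric minimax rate n^{-s/(2s+d)}
# degrades with the ambient dimension d (s = smoothness, here s = 2).
s = 2.0
for d in (1, 10, 100):
    exponent = s / (2 * s + d)
    # Sample size needed for error <= 0.1, i.e. n >= 0.1^{-1/exponent}.
    n_needed = 0.1 ** (-1.0 / exponent)
    print(f"d={d:>3}: rate n^-{exponent:.3f}, n for error 0.1 ~ {n_needed:.2e}")
```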
The authors introduce a new structural model:
$$F(X) = f(\Pi_\gamma(X)),$$
where $\Pi_\gamma:\mathbb{R}^d\to[0,\mathrm{len}_\gamma]$ is the closest-point projection onto the parameter of an unknown regular curve $\gamma:[0,\mathrm{len}_\gamma]\to\mathbb{R}^d$, and $f:[0,\mathrm{len}_\gamma]\to\mathbb{R}$ is an unknown link function.
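To make the model concrete, here is a minimal simulation sketch of the forward model $F(X)=f(\Pi_\gamma(X))$. It is our own illustration, not the authors' estimator: the helix $\gamma$ in $\mathbb{R}^3$ (the paper allows general regular curves in $\mathbb{R}^d$), the link $f=\tanh$, the noise levels, and the grid-based approximation of $\Pi_\gamma$ are all assumptions chosen for the example.

```python
import numpy as np

def gamma(t):
    """An example regular curve gamma: [0, 4*pi] -> R^3 (a helix)."""
    return np.stack([np.cos(t), np.sin(t), 0.5 * t], axis=-1)

# Approximate the closest-point projection Pi_gamma by a dense discretization
# of the curve; this is valid only where X stays close enough to gamma for the
# closest-point projection to be well-defined, as the model requires.
t_grid = np.linspace(0.0, 4 * np.pi, 4000)
curve = gamma(t_grid)                                 # (4000, 3) points on gamma

def pi_gamma(X):
    """Parameter of the nearest discretization point of gamma for each row of X."""
    d2 = ((X[:, None, :] - curve[None, :, :]) ** 2).sum(-1)   # (n, 4000)
    return t_grid[d2.argmin(axis=1)]

f = np.tanh                                           # an illustrative monotone link

rng = np.random.default_rng(0)
t = rng.uniform(0.0, 4 * np.pi, size=500)
X = gamma(t) + 0.3 * rng.normal(size=(500, 3))        # inputs scattered around gamma
y = f(pi_gamma(X)) + 0.05 * rng.normal(size=500)      # noisy observations of F(X)
```

Note that the snippet only generates data from the model; in the paper, $\gamma$, $f$, and the distribution of $X$ are all unknown and must be recovered from $(X, y)$ pairs by the proposed conditional-regression estimator.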