Provable FDR Control for Deep Feature Selection: Deep MLPs and Beyond

Notice: This research summary and analysis were automatically generated using AI technology. For full accuracy, please refer to the original arXiv source.

We develop a flexible feature selection framework based on deep neural networks that approximately controls the false discovery rate (FDR), a measure of Type-I error. The method applies to architectures whose first layer is fully connected. From the second layer onward, it accommodates multilayer perceptrons (MLPs) of arbitrary width and depth, convolutional and recurrent networks, attention mechanisms, residual connections, and dropout. Training by stochastic gradient descent with data-independent initializations and learning rates is likewise covered. To the best of our knowledge, this is the first work to provide a theoretical guarantee of FDR control for feature selection within such a general deep learning setting. Our analysis is built upon a multi-index data-generating model and an asymptotic regime in which the feature dimension $n$ diverges faster than the latent dimension $q^{*}$, while the sample size, the number of training iterations, the network depth, and hidden layer widths are left unrestricted. Under this setting, we show that each coordinate of the gradient-based feature-importance vector admits a marginal normal approximation, thereby supporting the validity of asymptotic FDR control. As a theoretical limitation, we assume $\mathbf{B}$-right orthogonal invariance of the design matrix, and we discuss broader generalizations. We also present numerical experiments that underscore the theoretical findings.
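To make the selection step concrete, here is a minimal sketch of how marginally normal importance statistics can be turned into an FDR-controlling selection via the Benjamini–Hochberg procedure. The statistics `t`, the signal strength, and the nominal level are all hypothetical choices for illustration; the paper's actual selection rule may differ.

```python
import numpy as np
from math import erfc, sqrt

def benjamini_hochberg(pvals, alpha=0.1):
    """Benjamini-Hochberg step-up procedure: return indices of rejected hypotheses."""
    p = np.asarray(pvals)
    n = p.size
    order = np.argsort(p)
    ranked = p[order]
    # Largest k with p_(k) <= alpha * k / n.
    below = ranked <= alpha * np.arange(1, n + 1) / n
    if not below.any():
        return np.array([], dtype=int)
    k = np.max(np.nonzero(below)[0])
    return order[: k + 1]

# Hypothetical importance statistics: each coordinate approximately N(0, 1)
# under the null, shifted for the 20 truly important features.
rng = np.random.default_rng(0)
t = rng.standard_normal(1000)
t[:20] += 5.0

# Two-sided p-values from the marginal normal approximation.
pvals = np.array([erfc(abs(x) / sqrt(2)) for x in t])
selected = benjamini_hochberg(pvals, alpha=0.1)
```

The key point is that FDR control here only needs the marginal null distribution of each importance coordinate, which is exactly what the paper's normal approximation supplies.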


💡 Research Summary

This paper introduces a theoretically grounded feature‑selection procedure for deep neural networks that provably controls the false discovery rate (FDR). The authors consider any architecture whose first layer is a dense, fully‑connected linear map; from the second layer onward the network may be an arbitrary‑width/depth multilayer perceptron, a convolutional or recurrent network, an attention module, residual connections, dropout, etc. The method is agnostic to the training protocol: stochastic gradient descent (SGD) with data‑independent initializations and learning‑rate schedules is allowed.
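As a hedged illustration of this setup, the sketch below trains a tiny network with a dense first layer by plain gradient descent (data-independent initialization), then computes a gradient-based per-feature importance. The mean squared input gradient used here is one common surrogate; the data, architecture sizes, and the exact importance statistic are assumptions, not the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: only the first 3 of 20 features matter (hypothetical setup).
m, n = 200, 20
X = rng.standard_normal((m, n))
y = np.tanh(X[:, 0] + X[:, 1] - X[:, 2]) + 0.1 * rng.standard_normal(m)

# Dense first layer (n -> h), then a one-hidden-layer head (h -> 1).
h = 16
W1 = rng.standard_normal((n, h)) / np.sqrt(n)   # data-independent initialization
b1 = np.zeros(h)
W2 = rng.standard_normal((h, 1)) / np.sqrt(h)
b2 = np.zeros(1)

lr = 0.1
for _ in range(1000):                            # full-batch gradient descent
    A = np.tanh(X @ W1 + b1)                     # hidden activations
    pred = (A @ W2 + b2).ravel()
    err = pred - y                               # squared-loss residual
    # Backpropagation.
    gW2 = A.T @ err[:, None] / m
    gb2 = err.mean()
    dZ = (err[:, None] @ W2.T) * (1 - A ** 2)
    gW1 = X.T @ dZ / m
    gb1 = dZ.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

# Gradient-based feature importance: mean squared gradient of the network
# output with respect to each input coordinate (one common surrogate).
A = np.tanh(X @ W1 + b1)
dpred_dX = ((1 - A ** 2) * W2.T) @ W1.T          # (m, n) input gradients
importance = (dpred_dX ** 2).mean(axis=0)
```

After training, the importance mass should concentrate on the three signal coordinates; these per-coordinate statistics are the objects to which the paper's normal approximation would apply.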

The statistical setting is a multi‑index data‑generating model: the response vector $y\in\mathbb{R}^m$ and the design matrix $X\in\mathbb{R}^{m\times n}$ satisfy a relation of the form
$$y = f(X\mathbf{B}, \boldsymbol{\varepsilon}),$$
where $\mathbf{B}\in\mathbb{R}^{n\times q^{*}}$ collects the $q^{*}$ latent directions, $f$ is an unknown link function, and $\boldsymbol{\varepsilon}$ is a noise vector.
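Data from a multi-index model of the form $y = f(X\mathbf{B}, \boldsymbol{\varepsilon})$ can be simulated in a few lines. The dimensions, the sparsity pattern of $\mathbf{B}$, and the link $f$ below are all hypothetical; a Gaussian design is used because it satisfies the right-orthogonal-invariance assumption.

```python
import numpy as np

rng = np.random.default_rng(2)

m, n, q = 100, 50, 2                  # sample size, feature dim, latent dim q*
X = rng.standard_normal((m, n))       # Gaussian design: B-right orthogonally invariant
B = np.zeros((n, q))
B[0, 0] = B[1, 1] = 1.0               # two active features (hypothetical pattern)
eps = 0.1 * rng.standard_normal(m)

# Multi-index response y = f(XB, eps) for a nonlinear link f.
U = X @ B                             # latent indices, shape (m, q)
y = np.sin(U[:, 0]) + U[:, 1] ** 2 + eps
```

Note that only the $q^{*}$-dimensional projection $X\mathbf{B}$ enters the response, which is what makes the regime $n \gg q^{*}$ meaningful.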

