Architecture independent generalization bounds for overparametrized deep ReLU networks

Notice: This research summary and analysis were automatically generated using AI. For full accuracy, please refer to the [Original Paper Viewer] below or the original arXiv source.

We prove that overparametrized neural networks are able to generalize with a test error that is independent of the level of overparametrization, and independent of the Vapnik-Chervonenkis (VC) dimension. We prove explicit bounds that only depend on the metric geometry of the test and training sets, on the regularity properties of the activation function, and on the operator norms of the weights and norms of biases. For overparametrized deep ReLU networks with a training sample size bounded by the input space dimension, we explicitly construct zero loss minimizers without use of gradient descent, and prove a uniform generalization bound that is independent of the network architecture. We perform computational experiments of our theoretical results with MNIST, and obtain agreement with the true test error within a 22% margin on average.


💡 Research Summary

The paper addresses a fundamental puzzle in modern deep learning: why heavily over‑parameterized neural networks often generalize well despite having far more parameters than training samples. The authors focus on deep ReLU networks whose number of parameters far exceeds the size of the training set, a regime they call “strongly over‑parameterized” (the training sample size n is bounded by the input dimension M₀). Under mild assumptions—Lipschitz continuous activation with linear growth, non‑increasing layer widths, and a ReLU nonlinearity—they derive deterministic, data‑dependent generalization bounds that do not involve the VC dimension, the total number of parameters, or the depth of the network.
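To make the "strongly over-parameterized" regime concrete: when the sample size n is at most the input dimension M₀ and the training inputs are linearly independent, an exact interpolant of the training data exists in closed form, with no gradient descent. The sketch below is a hypothetical illustration of that fact using a single linear map solved via the pseudoinverse; it is not the paper's actual zero-loss construction for deep ReLU networks, only a minimal demonstration of why n ≤ M₀ makes zero training loss achievable.

```python
import numpy as np

# Illustrative sketch (not the paper's construction): with n <= M0 and
# generic inputs, the data matrix X has full column rank, so a linear map
# W with W @ X == Y exists exactly -- a zero-loss minimizer in closed form.
rng = np.random.default_rng(0)
M0, n = 10, 4                       # input dimension >= sample size
X = rng.standard_normal((M0, n))    # columns are training inputs
Y = rng.standard_normal((3, n))     # columns are training targets

# Moore-Penrose pseudoinverse: since X has full column rank, pinv(X) @ X = I_n,
# hence W @ X = Y exactly (up to floating-point error).
W = Y @ np.linalg.pinv(X)

train_loss = np.linalg.norm(W @ X - Y) ** 2 / n
print(train_loss)
```

For generic (e.g. Gaussian) inputs the printed training loss is zero up to floating-point rounding, illustrating why interpolation is automatic in this regime.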

The analysis proceeds in three stages. First, Proposition 1.2 gives a crude a-priori bound on the discrepancy D between the training and test loss.

