Random Walk Initialization for Training Very Deep Feedforward Networks
Training very deep networks is an important open problem in machine learning. One of many difficulties is that the norm of the back-propagated error gradient can grow or decay exponentially. Here we show that training very deep feed-forward networks (FFNs) is not as difficult as previously thought. Unlike when back-propagation is applied to a recurrent network, application to an FFN amounts to multiplying the error gradient by a different random matrix at each layer. We show that the successive application of correctly scaled random matrices to an initial vector results in a random walk of the log of the norm of the resulting vectors, and we compute the scaling that makes this walk unbiased. The variance of the random walk grows only linearly with network depth and is inversely proportional to the size of each layer. Practically, this implies a gradient whose log-norm scales with the square root of the network depth and shows that the vanishing gradient problem can be mitigated by increasing the width of the layers. Mathematical analyses and experimental results using stochastic gradient descent to optimize tasks related to the MNIST and TIMIT datasets are provided to support these claims. Equations for the optimal matrix scaling are provided for the linear and ReLU cases.
💡 Research Summary
The paper addresses the long‑standing problem of vanishing or exploding gradients in very deep feed‑forward neural networks (FFNs). While recurrent networks (RNNs) suffer from exponential growth or decay of gradients because the same weight matrix is repeatedly applied during back‑propagation through time, FFNs apply a different random matrix at each layer. The authors model the back‑propagation of the error gradient as a product of independent random matrices and show that the logarithm of the gradient norm follows a random walk. By choosing a global scaling factor g appropriately, this random walk can be made unbiased, i.e., its expected value stays near zero, while its variance grows only linearly with depth and inversely with layer width.
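The random-walk behavior described above is easy to observe numerically. The sketch below (not the paper's code; the width N, depth D, and trial count are arbitrary illustrative choices) back-propagates a random unit vector through a deep *linear* FFN by multiplying it with a fresh random matrix at each layer, tracking the log of the accumulated norm:

```python
# Simulate back-propagation through a deep linear FFN: at each layer the
# vector is multiplied by an independent random matrix with N(0, 1/N)
# entries, and we record the running log of the gradient norm.
import numpy as np

rng = np.random.default_rng(0)
N, D, trials = 50, 100, 400  # layer width, depth, number of independent runs

log_norms = np.zeros((trials, D))
for t in range(trials):
    delta = rng.standard_normal(N)
    delta /= np.linalg.norm(delta)
    log_norm = 0.0
    for d in range(D):
        W = rng.standard_normal((N, N)) / np.sqrt(N)  # entries ~ N(0, 1/N)
        delta = W @ delta
        step = np.linalg.norm(delta)
        log_norm += np.log(step)
        log_norms[t, d] = log_norm
        delta /= step  # renormalize; the log-norm is accumulated separately

# Across trials the log-norm behaves like a random walk: its variance at
# depth d grows roughly linearly in d, and its mean drifts like -d/(2N).
print("mean log-norm at depth D:", log_norms[:, -1].mean())
print("variance at depth D:     ", log_norms[:, -1].var())
```

Without the corrective scaling g, the walk has a small negative drift per layer, which is why unscaled deep networks see their gradients shrink; doubling the width N halves both the drift and the variance per layer.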
Mathematically, the network is defined by a linear transformation with a global scale g, a non-linearity f, and biases. The back-propagation equation is δₙ = g W̃ₙ₊₁ δₙ₊₁, where W̃ₙ combines the element-wise derivative f′(aₙ) with the transpose of the forward weight matrix. The squared norm therefore evolves as |δₙ|² = g² zₙ₊₁ |δₙ₊₁|², with zₙ = ‖W̃ₙ δₙ‖² / ‖δₙ‖². The overall ratio of the input-layer to output-layer squared gradient norms is Z = |δ₀|²/|δ_D|² = g^{2D} ∏_{n=1}^{D} zₙ. Taking logs yields ln Z = D ln g² + Σ_{n=1}^{D} ln zₙ, a sum of i.i.d. random variables. If the entries of W̃ₙ are i.i.d. Gaussian with variance 1/N (N = layer width), then W̃ₙ δₙ/‖δₙ‖ is a Gaussian vector independent of δₙ, and zₙ follows a χ²_N distribution scaled by 1/N, so E[zₙ] = 1 and Var(zₙ) = 2/N. Using a first-order expansion of ln zₙ around zₙ = 1, the expectation E[ln zₙ] ≈ −Var(zₙ)/2 = −1/N. Choosing ln g² = −E[ln zₙ] makes the random walk unbiased, giving g ≈ exp(1/(2N)) in the linear case, while ln Z then fluctuates with a variance that grows only linearly in D and shrinks as 1/N.
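The distributional claim about zₙ can be checked directly. The sketch below (an illustrative check, not the paper's code) samples z = ‖W̃δ‖²/‖δ‖² for Gaussian W̃ with variance-1/N entries, and compares E[ln z] against the first-order value −1/N implied by the expansion around z = 1, from which the unbiasing scale g ≈ exp(1/(2N)) follows:

```python
# Verify numerically that z = ||W~ d||^2 / ||d||^2 is chi^2_N scaled by 1/N,
# so E[ln z] is close to -1/N and g = exp(-E[ln z]/2) is close to exp(1/(2N)).
import numpy as np

rng = np.random.default_rng(1)
N, samples = 100, 200_000

# By rotational symmetry, each row of V stands in for W~ d with ||d|| = 1:
# a Gaussian vector with i.i.d. N(0, 1/N) entries, so z is its squared norm.
V = rng.standard_normal((samples, N)) / np.sqrt(N)
z = (V ** 2).sum(axis=1)

mean_ln_z = np.log(z).mean()
print(f"E[ln z] ~ {mean_ln_z:.5f}  (first-order prediction: {-1 / N})")

# Unbiased walk: ln g^2 + E[ln z] = 0  =>  g = exp(-E[ln z] / 2)
g = np.exp(-mean_ln_z / 2)
print(f"empirical g = {g:.5f},  exp(1/(2N)) = {np.exp(1 / (2 * N)):.5f}")
```

With this g, each layer's contribution to ln Z has zero mean, and only the linearly growing (and 1/N-suppressed) variance of the walk remains.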