Dichotomy of Feature Learning and Unlearning: Fast-Slow Analysis on Neural Networks with Stochastic Gradient Descent
The dynamics of gradient-based training in neural networks often exhibit nontrivial structure; hence, understanding them remains a central challenge in theoretical machine learning. In particular, the concept of feature unlearning, in which a neural network progressively loses previously learned features over long training, has gained attention. In this study, we consider the infinite-width limit of a two-layer neural network updated with large-batch stochastic gradient descent and derive differential equations with different time scales, revealing the mechanism and conditions under which feature unlearning occurs. Specifically, we exploit the fast-slow structure of the dynamics: the alignment of the first-layer weights develops rapidly, while the second-layer weights evolve slowly. The direction of the flow on a critical manifold, determined by the slow dynamics, decides whether feature unlearning occurs. We validate the result numerically and derive theoretical grounding and scaling laws for feature unlearning. Our results yield the following insights: (i) the strength of the primary nonlinear term in the data induces feature unlearning, and (ii) the initial scale of the second-layer weights mitigates it. Technically, our analysis utilizes Tensor Programs and singular perturbation theory.
💡 Research Summary
This paper investigates the phenomenon of feature unlearning in two‑layer neural networks trained with large‑batch stochastic gradient descent (SGD). The authors consider the infinite‑width limit of a student network learning from a single‑index teacher model, where inputs are high‑dimensional Gaussian vectors and the teacher output is generated by a nonlinear link function σ★. Both the teacher and student activation functions are assumed to admit Hermite expansions with polynomially bounded coefficients. The training data arrive online in batches of size n, in the high‑dimensional regime n/d → δ as both n and the input dimension d tend to infinity. The first‑layer weights are renormalized after each SGD step to keep their norm fixed, and the second‑layer weights are all initialized to a common positive constant.
Using the Tensor Programs framework, the authors rigorously derive a deterministic ordinary differential equation (ODE) that describes the evolution of two macroscopic order parameters in continuous time τ = γt/m (γ is the learning rate, m the network width). The order parameters are R(τ), the alignment between a first‑layer weight and the teacher vector, and a(τ), the scale of the second‑layer weights. The ODE reads
$$\dot R = \frac{1}{2}\, a \left(1 - R^2\right)$$
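The fast-slow separation can be illustrated by integrating the fast equation for the alignment R with the slow variable a frozen. The following is a minimal sketch (not the authors' code) assuming a forward-Euler discretization and a held constant on the fast timescale; function name, step size, and initial condition are illustrative choices.

```python
# Sketch: Euler integration of the fast dynamics dR/dtau = (1/2) a (1 - R^2),
# with the slow second-layer scale a treated as frozen on the fast timescale.
# Illustrates the alignment R flowing onto the critical manifold R = sign(a).

def integrate_alignment(a, r0=0.1, dt=1e-2, steps=2000):
    """Euler-integrate dR/dtau = 0.5 * a * (1 - R**2) with a held fixed."""
    r = r0
    for _ in range(steps):
        r += dt * 0.5 * a * (1.0 - r * r)
    return r

# For a > 0 the alignment saturates near +1 (feature learning persists);
# for a < 0 it flows toward -1.
print(integrate_alignment(+1.0))
print(integrate_alignment(-1.0))
```

The closed-form solution of this equation is R(τ) = tanh(aτ/2 + artanh(R₀)), so the numerical trajectory should saturate exponentially fast toward ±1 depending on the sign of a, which is exactly the dichotomy the slow flow on the critical manifold then arbitrates.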