A mathematical analysis of the effects of Hebbian learning rules on the dynamics and structure of discrete-time random recurrent neural networks
We present a mathematical analysis of the effects of Hebbian learning in random recurrent neural networks, with a generic Hebbian learning rule including passive forgetting and different time scales for neuronal activity and learning dynamics. Previous numerical works have reported that Hebbian learning drives the system from chaos to a steady state through a sequence of bifurcations. Here, we interpret these results mathematically and show that these effects, involving a complex coupling between neuronal dynamics and synaptic graph structure, can be analyzed using Jacobian matrices, which introduce both a structural and a dynamical point of view on the neural network evolution. Furthermore, we show that the sensitivity to a learned pattern is maximal when the largest Lyapunov exponent is close to 0. We discuss how neural networks may take advantage of this regime of high functional interest.
💡 Research Summary
The paper provides a rigorous mathematical treatment of how Hebbian learning, augmented with a passive forgetting term and distinct time scales for neuronal activity and synaptic adaptation, reshapes the dynamics and topology of discrete‑time random recurrent neural networks (RRNNs). The authors start by defining the network: each neuron’s state evolves according to a nonlinear activation function φ applied to a weighted sum of its inputs, while the synaptic weight matrix W(t) is initially drawn from an Erdős‑Rényi random graph. The learning rule is a generalized Hebbian update
W_{ij}(t+1) = (1 − 1/τ_w) W_{ij}(t) + η x_i(t) x_j(t) − η α δ_{ij},
where η is the learning rate, τ_w≫1 sets a slow learning time constant, and α controls diagonal decay. This formulation captures the essential separation of fast neuronal dynamics from slow synaptic plasticity.
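The two coupled time scales can be sketched numerically. The sketch below is illustrative only: the network size, learning rate, τ_w, α, connection probability, and initial weight scale are assumptions, not values from the paper, and tanh stands in for the sigmoidal φ.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 100        # number of neurons (illustrative)
eta = 0.01     # learning rate eta
tau_w = 100.0  # slow synaptic time constant, tau_w >> 1
alpha = 0.1    # diagonal decay strength
g = 3.0        # initial weight scale (chaotic regime for large g)
p = 0.2        # Erdos-Renyi connection probability

# sparse random initial weights: Gaussian values on existing edges only
mask = rng.random((N, N)) < p
W = np.where(mask, rng.normal(0.0, g / np.sqrt(p * N), (N, N)), 0.0)

x = rng.uniform(-1.0, 1.0, N)  # neuronal state
phi = np.tanh                   # activation function

for t in range(1000):
    # fast neuronal dynamics: x(t+1) = phi(W(t) x(t))
    x = phi(W @ x)
    # slow Hebbian update with passive forgetting and diagonal decay
    W = (1.0 - 1.0 / tau_w) * W + eta * np.outer(x, x) - eta * alpha * np.eye(N)
```

Because η and 1/τ_w are both small, W changes only slightly per step while x can move through its whole range, which is exactly the separation of time scales the rule is meant to capture.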
A central analytical tool is the Jacobian J(t)=∂F/∂x evaluated at the current state and weight configuration. Because J(t) = φ′(·) W(t), its eigenvalue spectrum directly reflects both the instantaneous network topology and the local gain of the activation function. The authors obtain the largest Lyapunov exponent λ_max as the asymptotic exponential growth rate of the product of Jacobians along a trajectory. They demonstrate that as Hebbian learning proceeds, the average magnitude of the weights shrinks (due to the forgetting term), causing the spectral radius of J(t) to contract. Consequently λ_max moves from positive (chaotic regime) through zero to negative values, indicating a cascade of bifurcations: chaos → periodic orbits → stable fixed points.
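A standard way to estimate λ_max from this product of Jacobians is to propagate a tangent vector through the linearized dynamics and renormalize it at every step. The sketch below does this for a fixed weight matrix (learning frozen); the network size, weight scale, and iteration count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

N = 50
W = rng.normal(0.0, 1.2 / np.sqrt(N), (N, N))  # fixed weights, gain > 1
phi = np.tanh

def jacobian(x, W):
    # for x(t+1) = phi(W x(t)):  J(t) = diag(phi'(W x)) @ W
    u = W @ x
    return (1.0 - np.tanh(u) ** 2)[:, None] * W

x = rng.uniform(-1.0, 1.0, N)
v = rng.normal(size=N)
v /= np.linalg.norm(v)

T, log_sum = 2000, 0.0
for t in range(T):
    J = jacobian(x, W)      # Jacobian at the current state
    v = J @ v               # push the tangent vector forward
    norm = np.linalg.norm(v)
    log_sum += np.log(norm) # accumulate the local expansion rate
    v /= norm               # renormalize to avoid overflow/underflow
    x = phi(W @ x)          # advance the trajectory itself

lyap_max = log_sum / T      # average log growth rate = largest Lyapunov exponent
```

Under learning, the same loop would simply recompute J(t) from the evolving W(t); as the spectrum of J contracts, the accumulated growth rate, and hence λ_max, drops below zero.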
Beyond dynamics, the paper examines how the weight matrix’s graph structure evolves. Using standard network metrics—average degree ⟨k⟩, clustering coefficient C, and degree distribution—the authors show that learning induces a systematic sparsification. Strong connections coalesce into a core subgraph while peripheral links fade, a process that mirrors the contraction of the Jacobian spectrum and reinforces stability.
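For instance, the degree and clustering statistics can be read off a binarized version of W. The threshold value and the symmetrization step below are illustrative choices for the sketch, not the paper's procedure.

```python
import numpy as np

def graph_metrics(W, threshold=0.05):
    """Mean degree and mean local clustering of the thresholded weight graph."""
    # an (undirected) edge exists where |W_ij| exceeds the threshold
    A = (np.abs(W) > threshold).astype(float)
    np.fill_diagonal(A, 0.0)
    A = np.maximum(A, A.T)              # symmetrize for undirected metrics
    k = A.sum(axis=1)                   # node degrees
    triangles = np.diag(A @ A @ A) / 2  # closed triangles through each node
    possible = k * (k - 1) / 2          # neighbour pairs that could close
    with np.errstate(divide="ignore", invalid="ignore"):
        c = np.where(possible > 0, triangles / possible, 0.0)
    return k.mean(), c.mean()
```

Tracking these two numbers along the learning trajectory makes the sparsification visible: as weak links fall below the threshold, the mean degree drops while the surviving core keeps its clustering.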
The most novel contribution concerns functional sensitivity to a learned pattern ξ. Linearizing the stationary response around a fixed point shows that ∂x/∂ξ is governed by the inverse of (I − J), where I is the identity matrix. Near the critical point λ_max ≈ 0, the dominant eigenvalue of J approaches 1, making (I − J) ill‑conditioned and amplifying the pattern‑induced perturbation. This amplification is quantified through the Fisher information, which peaks precisely when λ_max is close to zero. Hence the network’s ability to discriminate, store, and retrieve a pattern is maximal at the edge of chaos.
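The ill‑conditioning of (I − J) near criticality is easy to demonstrate numerically. The sketch below uses a symmetric random matrix, rescaled so its leading eigenvalue is exactly 1, which is a simplification relative to the paper’s asymmetric synaptic matrices (a symmetric J keeps the spectrum real so the leading eigenvalue can be dialed directly).

```python
import numpy as np

rng = np.random.default_rng(2)
N = 100

# symmetric random matrix, rescaled so its largest eigenvalue is exactly 1
A = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))
J0 = (A + A.T) / 2.0
J0 /= np.linalg.eigvalsh(J0)[-1]   # eigvalsh returns ascending eigenvalues

# as the leading eigenvalue of J approaches 1, (I - J) becomes ill-conditioned
# and the linearized pattern response ||(I - J)^{-1}|| diverges
for s in (0.5, 0.9, 0.99):
    gain = np.linalg.norm(np.linalg.inv(np.eye(N) - s * J0), 2)
    print(f"leading eigenvalue {s:.2f} -> response gain {gain:.1f}")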
The paper situates these results within the broader literature on “critical brain” hypotheses. Prior numerical studies reported that Hebbian learning drives chaotic recurrent networks toward ordered regimes via a sequence of bifurcations; the present work supplies a formal Jacobian‑based explanation and links it to structural graph changes. The authors argue that operating near λ_max≈0 offers a sweet spot: the system remains sufficiently sensitive to external inputs (high computational flexibility) while retaining enough stability to preserve learned memories.
In conclusion, the study establishes three key insights: (1) a two‑time‑scale Hebbian rule can be analytically captured by Jacobian dynamics, (2) learning simultaneously reshapes the synaptic graph (making it sparser) and contracts the Jacobian spectrum, leading to a transition from chaos to fixed points, and (3) the regime where the largest Lyapunov exponent is near zero maximizes pattern sensitivity and information storage. These findings have implications for designing artificial recurrent networks that exploit critical dynamics for efficient learning, and they provide a theoretical foundation for why biological neural circuits might operate at the edge of chaos to balance robustness and adaptability.