Effects of Hebbian learning on the dynamics and structure of random networks with inhibitory and excitatory neurons
The aim of the present paper is to study the effects of Hebbian learning in random recurrent neural networks with biological connectivity, i.e. sparse connections and separate populations of excitatory and inhibitory neurons. We furthermore consider that the neuron dynamics may occur on a shorter time scale than synaptic plasticity, and we consider learning rules with passive forgetting. We show that the application of such Hebbian learning leads to drastic changes in the network dynamics and structure. In particular, the learning rule contracts the norm of the weight matrix and yields a rapid decay of the complexity and entropy of the dynamics. In other words, the network is rewired by Hebbian learning into a new synaptic structure that emerges on the basis of the correlations that progressively build up between neurons. We also observe that, within this emerging structure, the strongest synapses organize as a small-world network. A second effect of this contraction of the weight matrix is a rapid decrease of the spectral radius of the Jacobian matrix. This drives the system through the “edge of chaos”, where sensitivity to the input pattern is maximal. This scenario is remarkably well predicted by theoretical arguments derived from dynamical systems and graph theory.
💡 Research Summary
The paper investigates how Hebbian learning reshapes both the dynamics and the connectivity of random recurrent neural networks that respect two key biological constraints: sparse connections and a clear separation between excitatory (E) and inhibitory (I) neurons. The authors first construct a network of N units with a fixed E:I ratio (typically 4:1) and assign random initial weights drawn from a zero‑mean Gaussian distribution. Connections are sparse (5–20 % density) and obey Dale’s principle, i.e., E→* synapses are positive while I→* synapses are negative. Neuronal activity evolves on a fast time scale according to a continuous‑time differential equation
dx/dt = −x + φ(W x + I_ext),
where φ is a sigmoidal non‑linearity. Synaptic plasticity, in contrast, proceeds on a slower time scale and follows a modified Hebbian rule that includes passive forgetting:
Δw_ij = η x_i x_j − γ w_ij.
Here η is the learning rate and γ the forgetting constant. Learning proceeds in alternating phases: a “training” phase in which an external input pattern is presented and the network settles, followed by a weight‑update step; and a “free” phase where the input is removed and the network evolves autonomously. This protocol mimics activity‑dependent plasticity interleaved with spontaneous dynamics observed in cortical circuits.
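To make this two‑time‑scale protocol concrete, here is a minimal Python/NumPy sketch of the setup described above. All parameter values (N, the E:I split, connection density, η, γ, step counts) are illustrative placeholders rather than the authors' exact choices, simple Euler integration stands in for whatever scheme the paper actually uses, and the sign clipping at the end of each update is just one straightforward way to keep Dale's principle satisfied during learning; the paper's exact mechanism may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Network construction (illustrative values, not the authors' exact ones) ---
N          = 200          # number of neurons
frac_exc   = 0.8          # 4:1 excitatory:inhibitory ratio
density    = 0.1          # connection probability (5-20 % range in the text)
dt, tau    = 0.1, 1.0     # Euler step and neuronal time constant
eta, gamma = 0.01, 0.02   # learning rate and forgetting constant

is_exc = rng.random(N) < frac_exc                  # True for excitatory units
mask   = rng.random((N, N)) < density              # sparse connectivity mask
np.fill_diagonal(mask, False)

# Gaussian magnitudes with signs enforced by Dale's principle:
# columns of excitatory (presynaptic) neurons are positive, inhibitory negative.
W = np.abs(rng.normal(0.0, 1.0 / np.sqrt(density * N), (N, N))) * mask
W[:, ~is_exc] *= -1.0

def phi(u):
    """Sigmoidal transfer function."""
    return 1.0 / (1.0 + np.exp(-u))

def run_dynamics(W, x, I_ext, n_steps):
    """Euler integration of dx/dt = -x + phi(W x + I_ext)."""
    for _ in range(n_steps):
        x = x + dt / tau * (-x + phi(W @ x + I_ext))
    return x

def hebbian_update(W, x):
    """Hebbian step with passive forgetting: dW_ij = eta*x_i*x_j - gamma*W_ij,
    applied only to existing synapses (the sparse mask is preserved)."""
    W_new = W + (eta * np.outer(x, x) - gamma * W) * mask
    # Re-impose Dale's principle: excitatory columns stay >= 0, inhibitory <= 0
    W_new[:, is_exc]  = np.clip(W_new[:, is_exc], 0.0, None)
    W_new[:, ~is_exc] = np.clip(W_new[:, ~is_exc], None, 0.0)
    return W_new

# --- Alternating "training" / "free" protocol ---
x       = rng.random(N)
pattern = rng.random(N)              # external input pattern

for epoch in range(100):
    x = run_dynamics(W, x, pattern, n_steps=200)   # training phase: input on
    W = hebbian_update(W, x)                       # slow plasticity step
    x = run_dynamics(W, x, 0.0, n_steps=200)       # free phase: input off
```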
The authors monitor several quantitative indicators before, during, and after learning. The Frobenius norm of the weight matrix, ‖W‖_F, shrinks exponentially as learning proceeds, indicating a global contraction of synaptic strength. Correspondingly, the spectral radius ρ(W) falls below unity, and the Jacobian’s spectral radius ρ(J) – which determines local stability – follows the same trend. Initially the system exhibits chaotic dynamics (ρ(J) > 1, positive Lyapunov exponent). As learning progresses, ρ(J) approaches 1, placing the network at the so‑called “edge of chaos”. In this regime the system is maximally sensitive to small perturbations: the mutual information between input patterns and subsequent network states peaks, and the ability to discriminate between different inputs is highest. Once ρ(J) drops below 1, the dynamics become contractive, the Lyapunov exponent turns negative, and the network settles into low‑entropy, highly predictable trajectories. Entropy measured from the time series of neuronal states declines by roughly 30–40 % after learning, confirming a transition from rich, high‑dimensional activity to a more ordered regime.
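The quantities tracked in this paragraph are straightforward to compute from the weight matrix and the current state. The sketch below assumes that the stability criterion refers to the Jacobian of the discrete-time map x ↦ φ(W x + I_ext), i.e. J = diag(φ′(u)) W with u = W x + I_ext, whose spectral radius crossing 1 marks the edge of chaos; it also assumes the logistic sigmoid so that φ′ = φ(1 − φ). Both are assumptions made for illustration, not statements about the authors' exact formulation.

```python
import numpy as np

def frobenius_norm(W):
    """Global measure of synaptic strength, ||W||_F."""
    return np.linalg.norm(W, "fro")

def spectral_radius(M):
    """Largest eigenvalue modulus of a matrix."""
    return np.max(np.abs(np.linalg.eigvals(M)))

def jacobian_spectral_radius(W, x, I_ext):
    """Spectral radius of the Jacobian of the map x -> phi(W x + I_ext),
    J = diag(phi'(u)) @ W, evaluated at the current state x.
    Assumes the logistic sigmoid, so phi'(u) = phi(u) * (1 - phi(u))."""
    u = W @ x + I_ext
    s = 1.0 / (1.0 + np.exp(-u))
    return spectral_radius(np.diag(s * (1.0 - s)) @ W)

# Example: track the indicators across learning epochs
# (W, x, pattern as in the previous sketch):
# print(frobenius_norm(W), spectral_radius(W), jacobian_spectral_radius(W, x, pattern))
```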
Beyond dynamics, the authors examine how the connectivity pattern itself is reorganized. After learning, they retain only the strongest synapses (top 5–10 % by absolute value) and reconstruct a graph. This “pruned” network shows a marked reduction in average shortest‑path length and a substantial increase in clustering coefficient relative to the original random graph, hallmarks of a small‑world topology. The strongest connections are predominantly excitatory‑to‑excitatory (E→E), forming densely interlinked clusters that act as hubs for information flow, while inhibitory connections become more diffuse, preserving overall excitation‑inhibition balance. The emergence of a small‑world structure suggests that Hebbian learning not only tunes individual weights but also self‑organizes the network into an efficient wiring scheme that minimizes wiring cost while maximizing functional integration.
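One plausible way to carry out this pruning-and-graph-analysis step is sketched below using networkx. The 5 % retention threshold and the symmetrization of the pruned adjacency matrix (so that standard clustering and path-length metrics apply) are illustrative choices, not necessarily the authors'; the names `W_after_learning`, `n`, and `m` in the usage comment are placeholders.

```python
import numpy as np
import networkx as nx

def pruned_graph(W, keep_frac=0.05):
    """Keep only the strongest synapses (top fraction by |weight|) and
    return the resulting undirected graph for topology analysis."""
    absW = np.abs(W)
    threshold = np.quantile(absW[absW > 0], 1.0 - keep_frac)
    A = (absW >= threshold).astype(int)
    return nx.from_numpy_array(np.maximum(A, A.T))   # symmetrize for path/clustering metrics

def small_world_metrics(G):
    """Clustering coefficient and average shortest-path length, computed on the
    largest connected component since pruning may disconnect the graph."""
    giant = G.subgraph(max(nx.connected_components(G), key=len))
    return nx.average_clustering(giant), nx.average_shortest_path_length(giant)

# Compare the learned, pruned network against an equivalent random graph:
# C_learned, L_learned = small_world_metrics(pruned_graph(W_after_learning))
# G_rand = nx.gnm_random_graph(n, m)   # same number of nodes and edges
# C_rand, L_rand = small_world_metrics(G_rand)
```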
Parameter sweeps reveal that the forgetting constant γ critically shapes the trajectory through phase space. Large γ (strong forgetting) leads to rapid contraction of ‖W‖_F and a swift passage through the edge of chaos into an overly stable regime, which reduces computational flexibility. Small γ (weak forgetting) slows the contraction of the weights, keeping ρ(J) close to or above 1 for an extended period and maintaining high sensitivity, but at the cost of slower learning. An intermediate γ (≈0.02 in the authors’ simulations) yields the longest dwell time at the edge of chaos, balancing learning speed and dynamical richness. Similar robustness is observed across different initial sparsities and learning rates η.
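A sweep of this kind amounts to rerunning the learning protocol for each γ and measuring how long ρ(J) lingers near the critical value. The sketch below gives one way to quantify that dwell time; the tolerance band around 1, the candidate γ values, and the driver function `run_learning_and_record_rho` (which would wrap the earlier simulation sketch and record `jacobian_spectral_radius` once per epoch) are all hypothetical illustration choices.

```python
import numpy as np

def dwell_time_at_edge(rho_J_trace, band=(0.9, 1.1)):
    """Number of learning epochs during which the Jacobian spectral radius
    stays inside an (arbitrary, illustrative) band around the critical value 1."""
    rho = np.asarray(rho_J_trace)
    return int(np.sum((rho >= band[0]) & (rho <= band[1])))

# Sweep the forgetting constant (placeholder values), rerunning the learning
# protocol for each gamma and recording the per-epoch spectral radius:
# for gamma in [0.005, 0.01, 0.02, 0.05, 0.1]:
#     rho_trace = run_learning_and_record_rho(gamma)   # hypothetical driver around the earlier sketch
#     print(gamma, dwell_time_at_edge(rho_trace))
```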
The paper’s central contributions can be summarized as follows:
- Spectral Contraction: Hebbian learning with passive forgetting systematically reduces the spectral radius of both the weight matrix and the Jacobian, driving the system from chaotic to contractive dynamics.
- Edge‑of‑Chaos Navigation: The contraction naturally brings the network to the edge of chaos, where input‑output sensitivity and information processing capacity are maximized.
- Emergent Small‑World Wiring: The strongest synapses self‑organize into a small‑world graph, providing short communication paths and high clustering, which are known to support efficient computation in both biological and artificial networks.
- Role of Forgetting: The forgetting term γ acts as a tunable knob that controls how long the network lingers near the critical regime, offering a mechanistic explanation for the balance between plasticity and stability observed in cortical circuits.
Overall, the study bridges dynamical‑systems theory, graph theory, and neurobiology to show that a simple, biologically plausible learning rule can simultaneously sculpt network topology and drive the system toward a computationally optimal dynamical regime. These insights have practical implications for designing artificial recurrent networks that require both adaptability and stability, suggesting that incorporating controlled weight decay (analogous to γ) and encouraging sparse, excitatory‑dominant connectivity may yield networks that naturally operate near the edge of chaos with small‑world efficiency.