Neural networks with transient state dynamics


We investigate dynamical systems characterized by a time series of distinct semi-stable activity patterns, as they are observed in cortical neural activity. We propose and discuss a general mechanism allowing for an adiabatic continuation between attractor networks and a specific adjoined transient-state network, which is strictly dissipative. Dynamical systems with transient states retain functionality when their working point is autoregulated, avoiding both prolonged periods of stasis and drifting into a regime of rapid fluctuations. We show, within a continuous-time neural network model, that a single local updating rule for online learning allows simultaneously (i) for information storage via unsupervised Hebbian-type learning, (ii) for adaptive regulation of the working point and (iii) for the suppression of runaway synaptic growth. Simulation results are presented; the spontaneous breaking of time-reversal symmetry and link symmetry are discussed.


💡 Research Summary

The paper addresses a fundamental discrepancy between classical attractor neural networks and the transient, semi‑stable activity patterns observed in cortical recordings. While attractor models converge to fixed points, real cortical dynamics exhibit sequences of metastable states that persist for a finite time before spontaneously transitioning to the next pattern. To bridge this gap, the authors propose a unified framework that allows an adiabatic continuation from a conventional attractor network to a strictly dissipative transient‑state network.

The core of the model is a continuous-time neural system described by differential equations for the neuronal activities \(x_i(t)\) and the synaptic weights \(w_{ij}(t)\). The activity dynamics follow a leaky-integrator form, \(\tau \dot{x}_i = -x_i + f\big(\sum_j w_{ij} x_j + I_i\big)\), where \(f\) is a sigmoidal activation function and \(I_i\) denotes external input. Synaptic updates are governed by a single local rule that combines three components: (1) an unsupervised Hebbian term \(\eta\, x_i x_j\) that stores correlations, (2) a homeostatic regulation term that monitors the network's global mean activity \(\langle x\rangle\) and variance \(\sigma_x\) and rescales the weights to keep these statistics close to predefined targets \(\mu\) and \(\sigma^*\), and (3) a dissipative term proportional to the activity derivative, \(-\alpha \dot{x}_i\), which guarantees that the system never settles permanently in any attractor. Schematically, the weight update combines these three contributions as
\[
\dot{w}_{ij} \;=\; \underbrace{\eta\, x_i x_j}_{\text{Hebbian storage}} \;+\; \underbrace{g\big(\langle x\rangle, \sigma_x\big)\, w_{ij}}_{\text{homeostatic regulation}} \;-\; \underbrace{\alpha\, \dot{x}_i}_{\text{dissipation}},
\]
where \(g\) denotes the rescaling function that drives the mean activity and variance toward the targets \(\mu\) and \(\sigma^*\).
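The following is a minimal numerical sketch of the dynamics described above, using simple Euler integration. The specific functional form of the homeostatic term, all parameter values, and variable names are illustrative assumptions for this sketch and are not taken from the paper.

```python
import numpy as np

# Minimal sketch of the described continuous-time dynamics (Euler integration).
# The homeostatic term below uses an assumed form based only on the mean
# activity; the paper's actual rule also involves the activity variance.

rng = np.random.default_rng(0)

N = 20            # number of neurons
tau = 1.0         # membrane time constant
eta = 0.01        # Hebbian learning rate
alpha = 0.05      # strength of the dissipative term (assumed)
gamma = 0.01      # strength of the homeostatic regulation (assumed)
mu_target = 0.3   # target mean activity (assumed)
dt = 0.1          # integration time step

def f(h):
    """Sigmoidal activation function."""
    return 1.0 / (1.0 + np.exp(-h))

x = rng.uniform(0.0, 1.0, N)            # neuronal activities x_i
w = 0.1 * rng.standard_normal((N, N))   # synaptic weights w_ij
np.fill_diagonal(w, 0.0)                # no self-connections
I_ext = np.zeros(N)                     # external input I_i

for step in range(5000):
    # Leaky-integrator activity dynamics: tau * dx_i/dt = -x_i + f(sum_j w_ij x_j + I_i)
    dx = (-x + f(w @ x + I_ext)) / tau

    # Single local weight-update rule combining the three described components:
    hebb = eta * np.outer(x, x)                   # (1) unsupervised Hebbian storage
    homeo = -gamma * (x.mean() - mu_target) * w   # (2) homeostatic working-point regulation (assumed form)
    dissip = -alpha * dx[:, None]                 # (3) dissipative term proportional to dx_i/dt
    dw = hebb + homeo + dissip

    x = x + dt * dx
    w = w + dt * dw
    np.fill_diagonal(w, 0.0)

print("mean activity:", x.mean(), "mean |w|:", np.abs(w).mean())
```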

