Attractor Metadynamics in Adapting Neural Networks

Slow adaptation processes, like synaptic and intrinsic plasticity, abound in the brain and shape the landscape for the neural dynamics occurring on substantially faster timescales. At any given time the network is characterized by a set of internal parameters, which are adapting continuously, albeit slowly. This set of parameters defines the number and the location of the respective adiabatic attractors. The slow evolution of network parameters hence induces an evolving attractor landscape, a process which we term attractor metadynamics. We study the nature of the metadynamics of the attractor landscape for several continuous-time autonomous model networks. We find both first- and second-order changes in the location of adiabatic attractors and argue that the study of the continuously evolving attractor landscape constitutes a powerful tool for understanding the overall development of the neural dynamics.
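
The abstract's distinction between first- and second-order changes in attractor location can be made concrete with a toy system. The Python sketch below tracks the adiabatic fixed points of a single bistable rate unit, dx/dt = -x + tanh(g·x + θ), while the control parameter θ drifts slowly: inside the bistable range the attractor positions shift smoothly, and at the fold (saddle-node) point one attractor disappears and the state jumps discontinuously to the surviving one, loosely analogous to a first-order transition. The model, the gain g, and the drift rate are illustrative choices and are not taken from the paper.

```python
# Toy illustration of attractor metadynamics (not the paper's network):
# fast dynamics  dx/dt = -x + tanh(g*x + theta),  with theta drifting slowly.
# For g > 1 the unit is bistable over a range of theta; when theta crosses a
# saddle-node (fold) point, one adiabatic attractor vanishes and the state
# jumps discontinuously to the remaining attractor.
import numpy as np

g = 2.0       # recurrent gain (> 1 yields bistability)
dt = 0.01     # integration step of the fast dynamics
eps = 1e-3    # slow drift rate of theta (eps << 1: timescale separation)

def fixed_points(theta, grid=np.linspace(-3.0, 3.0, 6001)):
    """Locate the adiabatic fixed points x* = tanh(g*x* + theta) via sign changes."""
    f = -grid + np.tanh(g * grid + theta)
    idx = np.where(np.sign(f[:-1]) != np.sign(f[1:]))[0]
    # linear interpolation of each bracketed root
    return [grid[i] - f[i] * (grid[i + 1] - grid[i]) / (f[i + 1] - f[i])
            for i in idx]

x, theta = -1.0, -1.0                        # start in the lower attractor
for step in range(int(2.0 / (eps * dt))):    # ramp theta from -1 up to +1
    x += dt * (-x + np.tanh(g * x + theta))  # fast relaxation toward attractor
    theta += dt * eps                        # slow parameter drift
    if step % 20000 == 0:                    # sample the evolving landscape
        fps = [round(p, 3) for p in fixed_points(theta)]
        print(f"theta={theta:+.2f}  x={x:+.3f}  fixed points={fps}")
```

Running this shows the number of fixed points going 1 → 3 → 1 as theta is ramped, with the state jumping from the lower to the upper attractor once the lower attractor's fold point (near theta ≈ 0.53 for g = 2) is crossed.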


💡 Research Summary

The paper investigates how slow adaptation processes, namely synaptic plasticity and intrinsic neuronal plasticity, shape the landscape of fast neural dynamics. The authors introduce the concept of “attractor metadynamics” to describe the continuous evolution of the set of adiabatic attractors (stable fixed points of the fast subsystem) as the network’s internal parameters slowly change. They work with continuous-time recurrent neural networks (RNNs) in which the fast rate dynamics are coupled to slowly evolving internal parameters; a generic form of such a fast-slow system is sketched below.
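
A minimal sketch of such a fast-slow system, assuming a standard sigmoidal rate-model form; the paper's exact equations and adaptation rules may differ:

\[
\dot{x}_i = -\Gamma\, x_i + \sum_{j} w_{ij}\, y_j,
\qquad
y_j = \frac{1}{1 + e^{\,a_j (b_j - x_j)}},
\]

\[
\dot{a}_i = \epsilon_a\, A_i(x_i, a_i, b_i),
\qquad
\dot{b}_i = \epsilon_b\, B_i(x_i, a_i, b_i),
\qquad
\epsilon_a,\, \epsilon_b \ll \Gamma .
\]

Here \(x_i\) is the membrane potential of neuron \(i\), \(y_i\) its firing rate, \(\Gamma\) the fast relaxation rate, and \(w_{ij}\) the synaptic weights; the gains \(a_i\) and thresholds \(b_i\) adapt via slow rules \(A_i\), \(B_i\), which are left as placeholders here (standing in, e.g., for intrinsic plasticity). The timescale separation \(\epsilon_{a,b} \ll \Gamma\) is what makes the fixed points of the fast subsystem, evaluated at frozen parameters, well-defined adiabatic attractors.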

