Models of attractor dynamics in the brain
Attractor dynamics are a fundamental computational motif in neural circuits, supporting diverse cognitive functions through stable, self-sustaining patterns of neural activity. In these lecture notes, we review four key examples that demonstrate how auto-associative neural network models can elucidate the computational mechanisms underlying attractor-based information processing in biological neural systems. Drawing on empirical evidence, we explore hippocampal spatial representations, visual classification in inferotemporal cortex, perceptual adaptation and priming, and working-memory biases shaped by sensory history. Across these domains, attractor network models reveal common computational principles and provide analytical insight into how experience shapes neural activity and behavior. Our synthesis underscores the value of attractor models as powerful tools for probing the neural basis of cognition.
💡 Research Summary
This paper reviews and synthesizes four representative cases in which attractor dynamics—stable, self‑sustaining patterns of neural activity arising from recurrent connectivity—provide mechanistic explanations for diverse cognitive functions. The authors focus on point attractors implemented by auto‑associative networks, illustrating how these models capture experimental observations in the hippocampus, inferior temporal (IT) cortex, perceptual adaptation/priming, and working‑memory biases.
In the hippocampal case, the classic morph‑environment experiment of Wills et al. (2005) is re‑interpreted through a CA3 auto‑associative network that stores discrete spatial maps. When rats are exposed to intermediate shapes, place‑cell ensembles switch abruptly between the two stored maps, a phenomenon the authors model as a hysteretic transition between point attractors. The discussion acknowledges contrasting findings (e.g., Leutgeb et al., 2005) that show graded transitions, arguing that the hippocampus can operate in either discrete or continuous regimes depending on input ambiguity, prior learning, and task demands.
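The abrupt, map-switching behavior described above can be illustrated with a minimal Hopfield-style sketch (not the authors' CA3 model; the network size, patterns, and morph procedure here are arbitrary choices for illustration). Two random binary patterns stand in for the "square" and "circle" maps; cues mixing the two are fed to the recurrent dynamics, and retrieval snaps to one stored map or the other rather than producing a graded blend:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500

# Two uncorrelated binary "spatial maps" (e.g. square vs. circle enclosure)
square = rng.choice([-1, 1], N)
circle = rng.choice([-1, 1], N)

# Hebbian auto-associative weights (Hopfield rule), no self-connections
W = (np.outer(square, square) + np.outer(circle, circle)) / N
np.fill_diagonal(W, 0)

def settle(cue, steps=30):
    """Iterate the recurrent dynamics from a morphed cue to a fixed point."""
    s = cue.astype(float)
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1.0
    return s

# Morph levels (0 = square, 1 = circle); 0.5 skipped to avoid an exact tie
overlaps = []
for m in [0.0, 0.1, 0.2, 0.3, 0.4, 0.6, 0.7, 0.8, 0.9, 1.0]:
    k = int(m * N)
    cue = np.concatenate([circle[:k], square[k:]])  # mix the two maps
    s = settle(cue)
    overlaps.append(s @ square / N)  # ~1: square map retrieved; ~0: circle

# Retrieval lands near one stored map or the other at every morph level,
# with an abrupt transition rather than a graded blend.
print([round(o, 2) for o in overlaps])
```

True hysteresis, as in the morph experiment, would additionally require sweeping the morph level while carrying the network state from one exposure to the next, so that the current attractor persists past the symmetric midpoint.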
The second case examines visual categorization in IT cortex using the morphing paradigm of Akrami et al. (2009). Early neuronal responses (100–200 ms) encode stimulus similarity linearly, whereas later responses (200–500 ms) converge non‑linearly toward the “effective” endpoint, producing an asymmetric attractor basin. The authors construct a two‑layer, 2,500‑neuron auto‑associative network with sparse feed‑forward input and recurrent Hebbian connectivity. Simulations reveal that when the network approaches its storage capacity (≈160 patterns) and includes spike‑frequency adaptation, the model reproduces the observed asymmetric convergence. Low memory load yields overly broad attraction, while storing both endpoints eliminates the asymmetry, suggesting that experimental sampling bias (recording neurons selective for one endpoint) underlies the empirical asymmetry.
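The role of spike-frequency adaptation can be sketched in a much smaller toy network than the authors' 2,500-neuron model (the size, gain, and time constant below are arbitrary). A slow adaptation variable tracks each unit's recent activity and opposes sustained firing; without it the state stays locked in a stored pattern, while with it the state is eventually expelled from the attractor:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 400
P = 5  # well below capacity, so the stored patterns themselves are stable
patterns = rng.choice([-1, 1], (P, N))
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0)

def run(adapt_gain, steps=80, tau=10.0):
    """Recurrent dynamics with a slow adaptation current opposing activity."""
    s = patterns[0].astype(float)
    a = np.zeros(N)                 # adaptation variable, one per unit
    overlaps = []
    for _ in range(steps):
        a += (s - a) / tau          # low-pass filter of recent activity
        h = W @ s - adapt_gain * a  # adaptation pushes against sustained firing
        s = np.sign(h)
        s[s == 0] = 1.0
        overlaps.append(s @ patterns[0] / N)
    return overlaps

no_adapt = run(0.0)    # stays locked in the stored pattern
with_adapt = run(1.5)  # adaptation eventually expels the state
print(no_adapt[-1], min(with_adapt))
```

In the paper's setting, this destabilizing force interacts with near-capacity crosstalk to shape where the expelled state lands, which is what produces the asymmetric convergence toward the "effective" endpoint.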
The third section integrates perceptual adaptation and priming within an attractor framework. Using facial‑emotion priming experiments, the authors show that recent exposure to a happy or angry face biases perception of a subsequent neutral face in opposite directions. They model this by combining a short‑term adaptation term that pushes neural activity away from the recent stimulus (repulsion) with a stored categorical attractor that pulls ambiguous inputs toward the learned emotion (attraction). The interaction of these forces accounts for the coexistence of adaptation‑induced repulsion and priming‑induced attraction.
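The tug-of-war between repulsion and attraction can be reduced to a one-dimensional caricature (a sketch under simplifying assumptions, not the authors' model: the double-well potential, the Gaussian-shaped repulsion, and all coefficients are illustrative choices). Stored emotion categories are wells at x = -1 (angry) and x = +1 (happy), and a short-term force pushes the percept away from the recently seen adaptor:

```python
import numpy as np

def perceive(stimulus, adaptor, w_adapt=0.3, steps=200, dt=0.05):
    """Toy 1-D dynamics: categorical attractors at x = -1 (angry) and
    x = +1 (happy), plus a local repulsive bias away from the adaptor."""
    x = stimulus
    for _ in range(steps):
        attract = x - x**3                                 # pull toward +-1
        repel = (x - adaptor) * np.exp(-(x - adaptor)**2)  # push away, local
        x += dt * (attract + w_adapt * repel)
    return x

# A neutral face (0.0) after a happy adaptor (+1): repulsion dominates at
# the midpoint, so the percept falls into the opposite (angry) well.
print(perceive(0.0, +1.0))

# A clearly happy-leaning face (0.6): categorical attraction still wins,
# so the percept converges to the happy well despite adaptation.
print(perceive(0.6, +1.0))
```

With the adaptation term switched off (`w_adapt=0.0`), a neutral face simply stays at the unstable midpoint, confirming that any bias away from neutral in this toy is purely history-driven.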
Finally, the paper addresses working‑memory biases by coupling two auto‑associative modules (e.g., prefrontal and posterior cortices). Sensory history sets the initial state of each module, and recurrent dynamics drive the system toward one of several stored memory states. The resulting bias in recall reflects the shift of the attractor basin caused by prior experience, illustrating how attractor dynamics can mediate history‑dependent decision making.
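The two-module circuit is not specified in enough detail here to reproduce, but the core principle — sensory history shifting the initial state and hence which attractor wins — can be sketched in a single module (all sizes and coefficients below are illustrative assumptions). A weak cue for item A normally settles on A, but a residual trace of the previous trial's item B tips recall toward B:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 400

# Two stored working-memory items in one auto-associative module
mem_a = rng.choice([-1, 1], N)
mem_b = rng.choice([-1, 1], N)
W = (np.outer(mem_a, mem_a) + np.outer(mem_b, mem_b)) / N
np.fill_diagonal(W, 0)

def recall(init, steps=30):
    """Let the recurrent dynamics settle from a graded initial state."""
    s = init.astype(float)
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1.0
    return s

weak_cue = 0.10 * mem_a   # weak, ambiguous sensory evidence for item A
history = 0.15 * mem_b    # residual trace of the previous trial (item B)

no_hist = recall(weak_cue)              # settles on item A
with_hist = recall(weak_cue + history)  # the trace tips recall toward B

print(no_hist @ mem_a / N, with_hist @ mem_b / N)
```

The weaker the sensory cue, the smaller the historical trace needed to flip the outcome — the basin boundary has effectively been shifted by prior experience.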
Across all examples, the authors emphasize several key principles: (1) point attractors enable discrete memory storage and rapid pattern completion; (2) network capacity limits and spike‑frequency adaptation shape the sharpness and asymmetry of attractor basins; (3) experimental sampling biases can masquerade as computational asymmetries; and (4) the brain can flexibly switch between discrete and continuous attractor regimes depending on task constraints. Mathematical analyses (e.g., hysteresis curves, capacity α≈0.14, adaptation parameters c, b₁, b₂) are paired with extensive simulations that quantitatively match empirical data. The paper concludes that attractor network models constitute a powerful, unifying framework for probing how recurrent circuitry transforms experience into stable neural representations that guide perception, memory, and behavior.
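The quoted capacity α ≈ 0.14 is the classic result (α_c ≈ 0.138) for a fully connected Hopfield network storing dense random patterns. A quick simulation — a generic capacity check, not the authors' analysis — shows retrieval holding up well below that load and collapsing well above it:

```python
import numpy as np

def retrieval_overlap(N, P, seed=0, steps=20):
    """Store P random patterns, initialize at one of them, and measure
    the overlap with it after the dynamics settle."""
    rng = np.random.default_rng(seed)
    xi = rng.choice([-1, 1], (P, N))
    W = (xi.T @ xi) / N          # Hebbian weights over all stored patterns
    np.fill_diagonal(W, 0)
    s = xi[0].astype(float)
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1.0
    return s @ xi[0] / N

N = 500
low = retrieval_overlap(N, int(0.05 * N))   # alpha = 0.05, below capacity
high = retrieval_overlap(N, int(0.30 * N))  # alpha = 0.30, above capacity
print(low, high)
```

Near but below α_c, retrieval states survive with slightly degraded overlap; crossing the threshold, crosstalk between patterns overwhelms the signal and the network falls into spin-glass-like states — the regime the paper exploits to sharpen attraction basins.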