Short-Term Memory Through Persistent Activity: Evolution of Self-Stopping and Self-Sustaining Activity in Spiking Neural Networks
Memories in the brain fall into two categories: short-term and long-term. Long-term memories can remain for a lifetime, while short-term ones persist from a few milliseconds to a few minutes. Within short-term memory research, there is debate about which neural structure could implement it, since the mechanisms responsible for long-term memory appear inadequate for the task. Instead, it has been proposed that short-term memories could be sustained by the persistent activity of a group of neurons. In this work, we explore what topology could sustain short-term memories, not by designing a model from specific hypotheses, but through Darwinian evolution, in order to gain new insights into its implementation. We evolved 10 networks capable of retaining information for a fixed duration between 2 and 11 s. Our main finding is that evolution naturally created two functional modules in each network: one, composed primarily of excitatory neurons, sustains the information, while the other, composed mainly of inhibitory neurons, is responsible for forgetting. This demonstrates how the balance between excitation and inhibition plays an important role in cognition.
💡 Research Summary
The paper investigates how short‑term memory (STM) could be implemented in the brain by evolving spiking neural networks (SNNs) that maintain persistent activity for a prescribed interval and then automatically cease firing. The authors deliberately avoid imposing any preconceived circuit architecture; instead, they let a genetic algorithm (GA) discover the connectivity that satisfies the task. Each network consists of five input neurons, sixty hidden neurons, and a single output neuron. The hidden layer is fully recurrent (except for self‑connections) and its synaptic weights and the excitatory/inhibitory nature of each hidden neuron are encoded in a genome of 3,965 real‑valued genes. Neurons are modeled with the Izhikevich equations (regular spiking parameters a = 0.02, b = 0.2, c = ‑65 mV, d = 6), providing a biologically plausible yet computationally efficient spiking dynamics.
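The single-neuron dynamics described above can be sketched in a few lines. This is a minimal illustration of the Izhikevich model with the regular-spiking parameters quoted in the summary (a = 0.02, b = 0.2, c = −65 mV, d = 6); the Euler time step (0.5 ms), the standard 30 mV spike threshold, and the constant drive current are assumptions for the sketch, not details taken from the paper.

```python
import numpy as np

def izhikevich_step(v, u, I, a=0.02, b=0.2, c=-65.0, d=6.0, dt=0.5):
    """One Euler step of the Izhikevich model (regular-spiking parameters
    from the summary; dt in ms is an assumption). Returns (v, u, spiked)."""
    spiked = v >= 30.0                      # standard spike threshold (mV)
    v = np.where(spiked, c, v)              # reset membrane potential
    u = np.where(spiked, u + d, u)          # reset recovery variable
    dv = 0.04 * v**2 + 5.0 * v + 140.0 - u + I
    du = a * (b * v - u)
    return v + dt * dv, u + dt * du, bool(spiked)

# Drive a single neuron with a constant current and count its spikes
# over one simulated second (2000 steps at dt = 0.5 ms).
v, u = -65.0, -65.0 * 0.2                   # resting state: u = b * v
spikes = 0
for _ in range(2000):
    v, u, s = izhikevich_step(v, u, I=10.0)
    spikes += int(s)
```

With a sustained suprathreshold current the neuron fires regularly, which is the tonic-spiking regime the hidden layer relies on to keep activity alive after the stimulus ends.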
The task is defined as follows: during a 1‑second stimulation phase all five input neurons receive an identical external drive, which excites the network. After the stimulus ends, the network must keep the output neuron spiking continuously for a target duration (ranging from 2 s to 11 s, in 1‑second increments) without any external input, and then stop spiking for at least 4 s. Fitness is computed using a Gaussian‑shaped function that rewards output spikes that cease close to the target stopping time and penalizes late spikes heavily. To accelerate evolution, the authors employ a “continuous evolution” strategy: they first evolve a network that solves the 2‑second condition, then switch the fitness function to the 3‑second condition while keeping the population, and repeat up to 11 seconds. This approach leverages the fact that individuals near the fitness peak for a shorter duration often already possess the structure needed for a slightly longer duration.
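The summary does not reproduce the paper's exact fitness expression, so the following is a hypothetical sketch of a Gaussian-shaped score of the kind described: it peaks when the last output spike lands on the target stopping time and shrinks sharply when spikes occur after it. The width `sigma` and the late-spike penalty are illustrative choices, not the authors' values.

```python
import math

def fitness(spike_times, target_stop, sigma=0.5):
    """Toy Gaussian-shaped fitness (hypothetical form, not the paper's):
    reward the last output spike landing near target_stop (seconds),
    and penalize every spike occurring after it."""
    if not spike_times:
        return 0.0                          # a silent network scores nothing
    last = max(spike_times)
    score = math.exp(-((last - target_stop) ** 2) / (2 * sigma ** 2))
    late = sum(1 for t in spike_times if t > target_stop)
    return score / (1 + late)               # late spikes shrink fitness
```

Under the continuous-evolution scheme, a population tuned to such a peak for one target duration starts the next round already close to the peak for the duration one second longer, which is what makes the hand-off from the 2 s condition up to 11 s efficient.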
Across ten independent runs, the GA produced ten distinct networks, each reliably achieving one of the target durations. Detailed analysis of the hidden layer revealed a consistent functional segregation: a “maintenance module” composed mainly of excitatory neurons that sustains recurrent activity after the stimulus, and a “forgetting module” dominated by inhibitory neurons that, after the prescribed interval, suppress the maintenance module and force the output neuron to become silent. This emergent modularity demonstrates that a balance of excitation and inhibition can autonomously generate both the persistence and the controlled termination of activity—key requirements for STM.
To assess the rarity of such solutions, the authors generated 100,000 random networks. Only 118 exhibited self‑sustained activity over the full 7‑second experimental window, and none displayed the precise stopping behavior required by the task. Moreover, of the networks that could sustain activity, many never entered a silent phase at all, highlighting that forgetting is a far more difficult property to obtain than mere persistence.
The findings support non‑synaptic theories of STM that emphasize persistent firing rather than rapid synaptic plasticity (e.g., STDP). The emergence of an inhibitory “forgetting” subcircuit aligns with experimental observations that specific interneuron populations contribute to memory decay. By showing that such structures can arise without explicit design, the work suggests that the brain might exploit similar evolutionary pressures to shape circuits capable of flexible, time‑limited information storage.
In conclusion, the study demonstrates that evolutionary algorithms can uncover plausible neural architectures for short‑term memory, revealing a natural division into excitatory maintenance and inhibitory termination modules. This provides a fresh computational perspective on how excitation‑inhibition balance could underlie both the retention and the controlled loss of transient information in cortical circuits, and offers a valuable template for designing artificial systems that require temporally bounded memory without relying on synaptic weight changes.