A Markovian event-based framework for stochastic spiking neural networks
In spiking neural networks, information is conveyed by spike times, which depend on the intrinsic dynamics of each neuron, on the input it receives, and on the connections between neurons. In this article we study the Markovian nature of the sequence of spike times in stochastic neural networks, and in particular the ability to predict the next spike time from a spike train, and hence to describe the network activity on the basis of the spike times alone, regardless of the membrane potential process. To address this question rigorously, we introduce and study an event-based description of networks of noisy integrate-and-fire neurons, i.e., one based on the computation of the spike times. We show that the firing times of the neurons in the network constitute a Markov chain, whose transition probability is related to the probability distribution of the interspike intervals of the neurons in the network. Where the Markovian model can be developed, the transition probability is derived explicitly for classical cases of neural networks, such as linear integrate-and-fire neuron models with excitatory and inhibitory interactions, for different types of synapses, possibly featuring noisy synaptic integration, transmission delays, and absolute and relative refractory periods. This covers most of the cases that have been investigated in the event-based description of deterministic spiking neural networks.
💡 Research Summary
The paper addresses a fundamental question in computational neuroscience: whether the temporal sequence of spikes in a stochastic spiking neural network can be described solely by the spike times, without reference to the underlying membrane potential dynamics. To answer this, the authors develop an event-based framework for noisy integrate-and-fire neurons and prove that the sequence of spike times forms a Markov chain. The key insight is that, under the assumption of independent Gaussian white noise driving each neuron, the next spike time of any neuron depends only on the current network state, namely the recent spike times of all neurons, the synaptic weights, the transmission delays, and the refractory status of each neuron. This sufficiency makes the spike train a discrete-time stochastic process with well-defined transition probabilities.
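To make the notion of "network state" concrete, here is a minimal sketch, not taken from the paper, of the quantities an event-based simulator would need to track; the class and field names (`NetworkState`, `last_spike`, etc.) are hypothetical.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class NetworkState:
    """Hypothetical container for the state that, per the Markov claim,
    suffices to determine the distribution of the next network spike."""
    last_spike: np.ndarray        # most recent spike time of each neuron
    weights: np.ndarray           # synaptic weight matrix W[i, j]: j -> i
    delays: np.ndarray            # transmission delay matrix D[i, j]
    refractory_until: np.ndarray  # end time of each neuron's refractory period
```

The weights and delays are fixed network parameters; only `last_spike` and `refractory_until` evolve as spikes occur.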
Mathematically, each neuron obeys the stochastic differential equation $\tau \dot V_i = -V_i + I_i(t) + \xi_i(t)$, where $\xi_i(t)$ is Gaussian white noise. When the membrane potential reaches a threshold $\theta$, a spike is emitted, the potential is reset, and absolute and relative refractory mechanisms are applied. By solving the first-passage time problem for this stochastic process, the authors obtain the interspike interval (ISI) density $f_i(\Delta t)$ for each neuron. The ISI density depends on the mean and variance of the input current, which in turn are functions of the synaptic inputs, transmission delays, and noise in synaptic integration.
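As an illustration of the first-passage construction, the following is a minimal Monte Carlo sketch assuming a constant input current $\mu$, an Euler–Maruyama discretization, and the noise scaling $\tau\,dV = (\mu - V)\,dt + \sigma\,dW$; the function name and parameter values are our assumptions. The paper treats this problem analytically, whereas the sketch merely estimates the density by simulation.

```python
import numpy as np

def sample_first_passage(tau=20.0, theta=1.0, v_reset=0.0, mu=1.5,
                         sigma=0.5, dt=0.1, t_max=200.0, n_trials=10_000,
                         rng=None):
    """Draw first-passage (ISI) samples for a leaky integrate-and-fire
    neuron, tau * dV = (mu - V) dt + sigma dW, started at the reset
    potential; a discretized stand-in for tau V' = -V + I(t) + xi(t)
    with constant input current mu."""
    rng = np.random.default_rng() if rng is None else rng
    v = np.full(n_trials, v_reset)
    isi = np.full(n_trials, np.nan)           # NaN = no threshold crossing yet
    alive = np.ones(n_trials, dtype=bool)
    for k in range(1, int(t_max / dt) + 1):
        xi = rng.standard_normal(alive.sum())
        v[alive] += (mu - v[alive]) * dt / tau + (sigma / tau) * np.sqrt(dt) * xi
        crossed = alive.copy()
        crossed[alive] = v[alive] >= theta    # which live trials just fired
        isi[crossed] = k * dt                 # record the crossing time
        alive &= ~crossed
        if not alive.any():
            break
    return isi[~np.isnan(isi)]

# A normalized histogram of the samples approximates the ISI density f(Δt).
samples = sample_first_passage()
density, edges = np.histogram(samples, bins=100, density=True)
```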
The network’s spike sequence $\{(t_k, n_k)\}$ (the $k$-th spike occurs at time $t_k$ and is emitted by neuron $n_k$) is then shown to satisfy a Markov property:

$$
\mathbb{P}\big[(t_{k+1}, n_{k+1}) \,\big|\, (t_k, n_k), (t_{k-1}, n_{k-1}), \dots, (t_1, n_1)\big] \;=\; \mathbb{P}\big[(t_{k+1}, n_{k+1}) \,\big|\, X_k\big],
$$

where $X_k$ is the network state after the $k$-th spike (the most recent spike time of each neuron together with its refractory status), and the transition probability on the right-hand side is built from the first-passage ISI densities $f_i$.
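Under the simplifying assumption that, between two network events, each neuron's next firing time can be drawn independently from its current ISI distribution, one transition of the chain can be sketched as a race between first-passage times. Both `sample_isi` and the update rule below are hypothetical simplifications; the paper derives the exact transition kernel, including interaction effects, delays, and refractoriness.

```python
import numpy as np

def next_event(state, sample_isi, rng):
    """One transition (t_k, n_k) -> (t_{k+1}, n_{k+1}) of the event-based
    chain: each neuron proposes a next spike time from its first-passage
    distribution, and the earliest proposal fires.
    `sample_isi(i, state, rng)` is an assumed per-neuron ISI sampler,
    e.g. a first-passage sampler like the one sketched above."""
    n = len(state.last_spike)
    candidates = np.array([state.last_spike[i] + sample_isi(i, state, rng)
                           for i in range(n)])
    n_next = int(np.argmin(candidates))      # neuron with the earliest spike
    t_next = float(candidates[n_next])
    state.last_spike[n_next] = t_next        # only the state is updated
    return t_next, n_next
```

Iterating `next_event` produces a realization of $\{(t_k, n_k)\}$; because each step reads only the current `state`, the construction makes the Markov property explicit.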