Fast Inference of Interactions in Assemblies of Stochastic Integrate-and-Fire Neurons from Spike Recordings

We present two Bayesian procedures to infer the interactions and external currents in an assembly of stochastic integrate-and-fire neurons from the recording of their spiking activity. The first procedure is based on the exact calculation of the most likely time courses of the neuron membrane potentials conditioned by the recorded spikes, and is exact for a vanishing noise variance and for an instantaneous synaptic integration. The second procedure takes into account the presence of fluctuations around the most likely time courses of the potentials, and can deal with moderate noise levels. The running time of both procedures is proportional to the number S of spikes multiplied by the square of the number N of neurons. The algorithms are validated on synthetic data generated by networks with known couplings and currents. We also reanalyze previously published recordings of the activity of the salamander retina (including from 32 to 40 neurons, and from 65,000 to 170,000 spikes). We study the dependence of the inferred interactions on the membrane leaking time; the differences and similarities with the classical cross-correlation analysis are discussed.


💡 Research Summary

The paper introduces two Bayesian inference algorithms for estimating synaptic couplings and external currents in networks of stochastic leaky integrate‑and‑fire (LIF) neurons from recorded spike times. The first algorithm, termed the “fixed‑threshold” or zero‑noise method, exploits the fact that when the noise variance σ² is negligible the most probable trajectory of each neuron’s membrane potential V*(t) can be computed exactly. By maximizing the log‑likelihood (which reduces to minimizing the action of a Gaussian noise process) under the constraints that V*(t) starts at reset (0) after a spike and reaches the firing threshold Vth at the next observed spike, the authors derive a second‑order differential equation for V*(t). This equation is transformed into a first‑order LIF dynamics driven by an “optimal noise” η*(t) that follows a deterministic exponential law. Two types of contacts are identified: active contacts, where V*(t) touches Vth exactly at the arrival of a presynaptic spike, and passive contacts, where V*(t) brushes the threshold without generating a spike. By scanning the ordered list of incoming spikes within each inter‑spike interval (ISI) and solving for the minimal constant η that yields a contact without overshooting, the algorithm constructs V*(t) piecewise analytically. The computational cost scales as O(S·N²), where S is the total number of spikes and N the number of neurons, making it suitable for large recordings.
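The generative model underlying this procedure is a leaky integrate-and-fire neuron driven by a constant current, instantaneous synaptic kicks, and Gaussian white noise, with reset to 0 at threshold. The following is a minimal Euler-Maruyama sketch of that forward model (parameter names and the reset-to-zero convention follow the summary above; everything else, including the function signature, is illustrative rather than the paper's code):

```python
import numpy as np

def simulate_lif(spikes_in, weights, I, tau, sigma, v_th=1.0, t_max=5.0, dt=1e-3, seed=0):
    """Simulate one stochastic LIF neuron with instantaneous synapses.

    Assumed dynamics (hypothetical discretization of the model described above):
        dV/dt = -V/tau + I + sum over presynaptic spikes of J * delta(t - t_spike) + noise,
    with V reset to 0 when it reaches v_th.

    spikes_in : list of (spike_time, presynaptic_index) pairs
    weights   : weights[j] is the coupling from presynaptic neuron j
    """
    rng = np.random.default_rng(seed)
    n_steps = int(t_max / dt)

    # Bin presynaptic spikes onto the simulation grid (instantaneous integration).
    kick = np.zeros(n_steps)
    for t_sp, j in spikes_in:
        k = int(t_sp / dt)
        if k < n_steps:
            kick[k] += weights[j]

    v, out_spikes = 0.0, []
    for k in range(n_steps):
        # Euler-Maruyama step: leak + current + synaptic kick + white noise.
        v += dt * (-v / tau + I) + kick[k] + sigma * np.sqrt(dt) * rng.standard_normal()
        if v >= v_th:
            out_spikes.append(k * dt)
            v = 0.0  # reset after the emitted spike
    return out_spikes
```

With sigma = 0 and I above threshold, the neuron fires periodically; e.g. for tau = 1, I = 2, v_th = 1 the deterministic inter-spike interval is ln 2 ≈ 0.693, which the simulation reproduces up to discretization error.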

The second algorithm, called the “moving‑threshold” or moderate‑noise method, addresses the realistic situation where σ is not vanishing. Here the first‑passage time (FPT) density of an Ornstein‑Uhlenbeck process is used to approximate the probability of the observed spike times given the parameters. The log‑likelihood is expressed as a sum over ISIs of log p_FPT, where p_FPT depends on the deterministic part of the trajectory (identical to the zero‑noise case) and on the variance introduced by the finite σ. The optimal noise η*(t) now satisfies a linear differential equation with exponential decay, and the likelihood is maximized numerically with respect to the coupling matrix J and the external currents I. This approach retains the O(S·N²) scaling while providing accurate estimates for moderate noise levels.
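The structure of this step, maximizing a sum over ISIs of log first-passage-time densities with respect to the model parameters, can be illustrated with a deliberately simplified stand-in: for a drift-only Wiener process (instead of the Ornstein-Uhlenbeck process used in the paper) the FPT density to a fixed threshold is the inverse-Gaussian density, which has a closed form. The names, threshold, and noise level below are all assumptions for the toy example; only the optimize-a-sum-of-log-FPT-terms structure mirrors the method:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def neg_log_lik(mu, isis, a=1.0, sigma=0.5):
    """Negative log-likelihood of observed ISIs under a toy Wiener-FPT model.

    For Brownian motion with drift mu and noise sigma, the first-passage
    time to threshold a has the inverse-Gaussian density
        p(t) = a / (sigma * sqrt(2*pi*t**3)) * exp(-(a - mu*t)**2 / (2*sigma**2*t)).
    The log-likelihood is the sum of log p over the inter-spike intervals.
    """
    t = np.asarray(isis)
    log_p = (np.log(a) - np.log(sigma) - 0.5 * np.log(2 * np.pi * t**3)
             - (a - mu * t)**2 / (2 * sigma**2 * t))
    return -log_p.sum()

# Hypothetical usage: infer the drift (playing the role of the external
# current I) from a handful of fake ISIs by numerical maximization.
isis = [0.8, 1.1, 0.9, 1.2, 1.0]
res = minimize_scalar(lambda mu: neg_log_lik(mu, isis),
                      bounds=(0.1, 5.0), method="bounded")
```

For this toy density the maximum-likelihood drift has the closed form a / mean(isis), so the numerical optimum can be checked against it; in the paper's setting the analogous optimization runs over the full coupling matrix J and currents I, with the OU first-passage density evaluated around the deterministic optimal trajectory.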

Both methods are validated on synthetic data generated from known networks. When σ≈0, the fixed‑threshold algorithm recovers J and I with near‑perfect accuracy. For σ comparable to 0.2 I·√τ (τ being the membrane leak time constant), the moving‑threshold algorithm reduces estimation error substantially compared with a naïve extension of the zero‑noise method. The authors also apply the algorithms to extracellular recordings from the salamander retina, comprising 32–40 ganglion cells and 65,000–170,000 spikes. By varying τ, they show that inferred excitatory couplings decrease with longer leak times, whereas inhibitory couplings remain relatively stable, highlighting the physiological relevance of the leak parameter. Comparisons with classical cross‑correlation analyses reveal that the Bayesian inference captures indirect network effects and yields a more parsimonious description of functional connectivity.
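The classical baseline the inferred couplings are compared against is the pairwise cross-correlogram: a histogram of time lags between the spikes of two neurons. A minimal sketch (bin width and lag window are illustrative choices, not the paper's values):

```python
import numpy as np

def cross_correlogram(spikes_a, spikes_b, bin_width=0.005, max_lag=0.05):
    """Histogram of time lags (t_b - t_a) between two spike trains.

    A peak at small positive lags is the classical signature of a
    putative excitatory influence of neuron a on neuron b; a trough
    suggests inhibition.  Times are in seconds.
    """
    lags = []
    for ta in spikes_a:
        for tb in spikes_b:
            d = tb - ta
            if -max_lag <= d <= max_lag:
                lags.append(d)
    edges = np.arange(-max_lag, max_lag + bin_width, bin_width)
    counts, _ = np.histogram(lags, bins=edges)
    return counts, edges
```

The quadratic loop is fine for a sketch; for recordings with 10^5 spikes one would restrict the inner loop to a sliding window. Unlike this pairwise statistic, the Bayesian inference fits all couplings jointly, which is why it can discount indirect (network-mediated) correlations.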

Key contributions of the work are: (1) an exact, fast computation of the most probable membrane potential trajectory for vanishing noise, (2) a principled extension to moderate noise levels using Ornstein‑Uhlenbeck first‑passage statistics, (3) a computational complexity that grows only linearly with the number of spikes and quadratically with the number of neurons, and (4) demonstration on real, large‑scale neural data where the inferred connectivity exhibits sensible dependence on biophysical parameters. Limitations include the assumption of instantaneous synaptic integration, Gaussian white noise, and time‑invariant external currents. Future extensions could incorporate synaptic kernels, colored noise, and time‑varying stimuli, thereby broadening applicability to more complex cortical recordings and to the design of biologically inspired artificial neural networks. Overall, the paper provides a powerful and scalable framework for reconstructing functional connectivity from spike trains in stochastic spiking neural networks.

