On Dynamics of Integrate-and-Fire Neural Networks with Conductance Based Synapses

Reading time: 6 minutes

📝 Original Info

  • Title: On Dynamics of Integrate-and-Fire Neural Networks with Conductance Based Synapses
  • ArXiv ID: 0709.4370
  • Date: 2010-11-09
  • Authors: Researchers from original ArXiv paper

📝 Abstract

We present a mathematical analysis of networks of Integrate-and-Fire neurons with adaptive conductances. Taking into account the realistic fact that spike times are only known within some *finite* precision, we propose a model where spikes are effective at times that are multiples of a characteristic time scale $\delta$, where $\delta$ can be *arbitrarily* small (in particular, well below the numerical precision). We give a complete mathematical characterization of the model dynamics and obtain the following results. The asymptotic dynamics is composed of finitely many stable periodic orbits, whose number and period can be arbitrarily large and can diverge in a region of the synaptic-weight space traditionally called the "edge of chaos", a notion given a precise mathematical definition in the present paper. Furthermore, except at the edge of chaos, there is a one-to-one correspondence between membrane-potential trajectories and raster plots. This shows that, in this case, the neural code is entirely "in the spikes". As a key tool, we introduce an order parameter, easy to compute numerically and closely related to a natural notion of entropy, providing a relevant characterization of the computational capabilities of the network. This allows us to compare the computational capabilities of leaky Integrate-and-Fire models and conductance-based models. The present study considers networks with constant input and without time-dependent plasticity, but the framework has been designed with both extensions in mind.
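The dynamics described above can be illustrated with a minimal sketch. The snippet below simulates a simplified discrete-time leaky Integrate-and-Fire network (a BMS-type update rule with fixed synaptic weights, not the paper's full conductance-based model) and then looks for a period in the raster plot after a transient, which is the kind of asymptotically periodic behavior the abstract describes. All parameter values and function names here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def simulate_if_network(W, I, gamma=0.7, theta=1.0, T=4000, V0=None):
    """Discrete-time leaky IF network (simplified BMS-type rule):
    V_i(t+1) = gamma * V_i(t) * (1 - Z_i(t)) + sum_j W_ij * Z_j(t) + I_i,
    where Z_i(t) = 1 if V_i(t) >= theta (spike and reset), else 0.
    Returns the raster plot: a (T, N) array of 0/1 spike indicators.
    """
    N = W.shape[0]
    V = np.zeros(N) if V0 is None else V0.astype(float).copy()
    raster = np.zeros((T, N), dtype=np.uint8)
    for t in range(T):
        Z = (V >= theta).astype(float)      # which neurons spike now
        raster[t] = Z
        V = gamma * V * (1.0 - Z) + W @ Z + I  # leak, reset, synapses, input
    return raster

def raster_period(raster, transient=3000):
    """Smallest p such that the post-transient raster is p-periodic,
    or None if no period is detected within the observed window."""
    R = raster[transient:]
    for p in range(1, len(R) // 2 + 1):
        if np.array_equal(R[:-p], R[p:]):
            return p
    return None

# Small random network with constant input, as in the paper's setting.
rng = np.random.default_rng(0)
N = 5
W = rng.normal(0.0, 0.3, (N, N))   # fixed random synaptic weights
I = rng.uniform(0.2, 0.4, N)       # constant external input
raster = simulate_if_network(W, I)
p = raster_period(raster)
print("detected raster period:", p)
```

Away from the "edge of chaos", one would expect such a simulation to settle onto a short periodic orbit; near it, the detected period can grow without bound as the weights are varied, which is what the order parameter mentioned in the abstract is designed to quantify.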

💡 Deep Analysis

Deep Dive into On Dynamics of Integrate-and-Fire Neural Networks with Conductance Based Synapses.


📄 Full Content

Neuronal networks have the capacity to process incoming information, performing complex computational tasks (see (Rieke, Warland, Steveninck, & Bialek, 1996) for an in-depth review), including sensory-motor tasks. It is a crucial challenge to understand how this information is encoded and transformed. However, in in vivo neuronal networks, information processing usually arises from the interaction of many different functional units, having different structures and roles and interacting in complex ways. As a result, many time and space scales are involved. Also, in vivo neuronal systems are not isolated objects and have strong interactions with the external world, which hinder the study of any specific mechanism (Frégnac, 2004). In vitro preparations are less subject to these restrictions, but it is still difficult to design specific neuronal structures in order to investigate their role in information processing (Koch & Segev, 1998). In this context, models are often proposed that are sufficiently close to real neuronal networks to retain essential biological features, yet sufficiently simplified that their dynamics can be characterized, most often numerically and, when possible, analytically (Gerstner & Kistler, 2002b; Dayan & Abbott, 2001). This is always a delicate compromise. At one extreme, one reproduces all known features of ionic channels, neurons, synapses, and so on, and loses any hope of (mathematical, or even numerical) control over what is going on. At the other extreme, over-simplified models can lose important biological features. Moreover, sharp simplifications may produce exotic properties which are in fact induced by the model itself and do not exist in the real system. This is a crucial aspect of theoretical neuroscience, where one must not forget that models rest on hypotheses and therefore have intrinsic limits.

For example, it is widely believed that one of the major advantages of the Integrate-and-Fire (IF) model is its conceptual simplicity and analytical tractability, which can be used to explore some general principles of neurodynamics and coding. However, though the first IF model was introduced in 1907 by Lapicque (Lapicque, 1907) and though many important analytical and rigorous results have been published, essential parts are still missing from the theory of IF neuron dynamics (see e.g. (Mirollo & Strogatz, 1990; Ernst, Pawelzik, & Geisel, 1995; Senn & Urbanczik, 2001; Timme, Wolf, & Geisel, 2002; Memmesheimer & Timme, 2006; Gong & Leeuwen, 2007; Jahnke, Memmesheimer, & Timme, 2008) and references below for analytically solvable network models of spiking neurons). Moreover, while the analysis of an isolated neuron submitted to constant inputs is straightforward, the action of a periodic current on a neuron already reveals an astonishing complexity, and the mathematical analysis requires elaborate methods from dynamical systems theory (Keener, Hoppensteadt, & Rinzel, 1981; Coombes, 1999b; Coombes & Bressloff, 1999). In the same way, the computation of the spike train probability distribution resulting from the action of a Brownian noise on an IF neuron is not a completely straightforward exercise (Knight, 1972; Gerstner & Kistler, 2002a; Brunel & Sergi, 1998; Brunel & Latham, 2003; Touboul & Faugeras, 2007) and may require rather elaborate mathematics. At the level of networks the situation is even worse, and the techniques used for the analysis of a single neuron do not extend easily to the network case. For example, Bressloff and Coombes (Bressloff & Coombes, 2000b) have extended the analysis in (Keener et al., 1981; Coombes, 1999b; Coombes & Bressloff, 1999) to the dynamics of strongly coupled spiking neurons, but restricted to networks with specific architectures and under restrictive assumptions on the firing times.
Chow and Kopell (Chow & Kopell, 2000) studied IF neurons coupled through gap junctions, but the analysis for large networks assumes constant synaptic weights. Brunel and Hakim (Brunel & Hakim, 1999) extended the Fokker-Planck analysis, combined with a mean-field approach, to the case of a network with inhibitory synaptic couplings, but under the assumption that all synaptic weights are equal. However, synaptic weight variability plays a crucial role in the dynamics, as revealed e.g. by mean-field methods or numerical simulations (VanVreeswijk & Hansel, 1997; VanVreeswijk & Sompolinsky, 1998; VanVreeswijk, 2004). Mean-field methods allow the analysis of networks with random synaptic weights (Amari, 1972; Sompolinsky, Crisanti, & Sommers, 1988; Cessac, Doyon, Quoy, & Samuelides, 1994; Cessac, 1995; Hansel & Mato, 2003; Soula, Beslon, & Mazet, 2006; Samuelides & Cessac, 2007), but they require a "thermodynamic limit" where the number of neurons tends to infinity, and finite-size corrections are rather difficult to obtain. Moreover, the rigorous derivation of the mean-field equations, which requires large-deviations techniques (B

…(Full text truncated)…


Reference

This content is AI-processed based on ArXiv data.
