Output Stream of Binding Neuron with Feedback

Reading time: 6 minutes

📝 Original Info

  • Title: Output Stream of Binding Neuron with Feedback
  • ArXiv ID: 0706.0163
  • Date: 2011-07-20
  • Authors: Researchers from original ArXiv paper

📝 Abstract

The binding neuron model is inspired by numerical simulation of a Hodgkin-Huxley-type point neuron, as well as by the leaky integrate-and-fire model. In the binding neuron, the trace of an input is remembered for a fixed period of time, after which it disappears completely. This is in contrast with the above two models, where postsynaptic potentials decay exponentially and can be forgotten only after triggering. The finiteness of memory in the binding neuron allows one to construct fast recurrent networks for computer modeling. Recently, this finiteness was utilized for an exact mathematical description of the output stochastic process when the binding neuron is driven with a Poissonian input stream. In this paper, the simplest networking is considered for the binding neuron: every output spike of the single neuron is immediately fed back into its input. For this construction, externally fed with a Poissonian stream, the output stream is characterized in terms of the interspike-interval probability density if the binding neuron has threshold 2. For higher thresholds, the distribution is calculated numerically. The distributions are compared with those found for the binding neuron without feedback, and for the leaky integrator. Sample distributions for the leaky integrator with feedback are calculated numerically as well. It is concluded that even the simplest networking can radically alter spiking statistics. Information condensation at the level of a single neuron is discussed.


📄 Full Content

The main function of a neuron is to receive signals and to send them out. In real neurons, this function is realized through a concrete biophysical mechanism, the main parts of which are ion channels in the excitable membrane and variations of ionic concentrations inside and outside the nerve cell and its processes; see (Schmidt 1975) for details. The same function might be realized by means of any other mechanism able to support signal processing in the manner characteristic of a real neuron. If so, then it would be interesting to develop a model which realizes in abstract form the concept of signal processing in real neurons, and is exempt from the necessity to follow any particular biophysical mechanism supporting the processing. Such a model is necessary for a quantitative mathematical formulation of what goes on during signal/information processing in neural systems; see (van Hemmen 2007) for discussion. Attempts to develop such a model are mainly concentrated around the concepts of coincidence detector and temporal integrator; see the discussion in (König et al. 1996). One more model, the binding neuron (BN), is proposed in (Vidybida 1998). This model is inspired by numerical simulation of a Hodgkin-Huxley-type neuron stimulated through many synaptic inputs (Vidybida 1996), as well as by the leaky integrate-and-fire model (Segundo et al. 1968). It describes the functioning of a neuron in terms of events, which are input and output spikes, and the degree of temporal coherence between the input events; see (Vidybida 1998, 2007) for details.

It is observed that during processing of sensory signals, the spiking statistics of individual neurons change substantially as the signal travels from the periphery to more central areas (see, e.g., Eggermont 1991). This change of spiking statistics could underlie the information condensation that happens during perception (König and Krüger 2006). The transformation of statistics may happen due to feedforward and feedback connections between the neurons involved in the processing. Having such possibilities in mind, it would be interesting to check what happens to the statistical properties of a spike train when it passes through neuronal structures with feedback connections.

Usually, feedback/recurrent connections are considered between several neurons. In this paper we consider the simplest possibility, namely, a single neuron with feedback. Such a configuration, which we regard as the simplest possible networking, can be found in real biological objects (see, e.g., Aroniadou-Anderjaska et al. 1999; Nicoll and Jahr 1982). As the neuronal model we use the binding neuron, as it allows one to obtain exact mathematical expressions suitable for further analysis. It is assumed that the input stream in any synapse of the neuron is a Poisson one. In this case, from the mathematical point of view, all inputs can be replaced with a single one carrying a Poisson stream whose intensity equals the sum of all intensities in the synapses (Fig. 1, top). The binding neuron works as follows. Any input impulse is stored in the neuron for a time τ and is then forgotten. When the number of stored impulses, Σ, becomes equal to or larger than the threshold, N₀, the neuron sends an output spike, clears its internal memory, and is ready to receive impulses from the input stream. One obtains the binding neuron with feedback (BNF) by immediately feeding each output impulse back into the neuron's input (Fig. 1, bottom). In this case, just after firing, the neuron has one impulse in its internal memory, and this impulse has time to live equal to τ.
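The firing rule above can be illustrated with a small Monte Carlo simulation. This is an illustrative sketch, not the paper's exact analytical method; the function name `bn_isi` and its parameters are hypothetical. Feedback is modeled by placing one impulse with lifetime τ into the cleared memory immediately after each output spike:

```python
import random

def bn_isi(rate, tau, n0, feedback, rng, t_max=1e5):
    """Sample interspike intervals of a binding neuron (illustrative sketch).

    rate: intensity of the Poisson input stream; tau: time-to-live of a
    stored impulse; n0: firing threshold N0; feedback: if True, each output
    spike is immediately fed back as an input impulse with lifetime tau.
    """
    isis = []
    stored = []          # expiry times of impulses currently in memory
    t = 0.0
    last_spike = 0.0
    while t < t_max:
        t += rng.expovariate(rate)              # next Poisson arrival
        stored = [e for e in stored if e > t]   # forget expired impulses
        stored.append(t + tau)                  # store the new impulse
        if len(stored) >= n0:                   # threshold reached: fire
            isis.append(t - last_spike)
            last_spike = t
            stored = []                          # clear internal memory
            if feedback:
                stored.append(t + tau)           # fed-back output impulse
    return isis
```

With N₀ = 2 the fed-back impulse means that a single fresh input arriving within τ of the previous output spike already triggers the next one, so under this sketch the feedback shortens interspike intervals on average compared with the BN without feedback.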

The specifics of the mathematical analysis of BN-type systems are due to the presence in those systems of both deterministic and stochastic dynamics. Namely, the neuron obtains its input from a random stream (stochastic component), and every impulse is stored for the same fixed period of time (deterministic component). This is in contrast with queueing (mass service) theory (Khinchin 1955), where the service time (the counterpart of the time to live, τ) is random, exponentially distributed. The simultaneous presence of deterministic and random dynamics in real neurons is due to the fact that the existence of an impulse in a neuron (manifested as the excitatory postsynaptic potential) is supported by an electrochemical transient (Hodgkin and Huxley 1952), which is deterministic, whereas the input impulses come from other neurons and external media in an irregular (random) manner¹.

It is widely accepted that as sensory signals flow hierarchically from the sensory periphery to central brain areas, the information present in the signals becomes less analogue and more discrete, eventually coming to represent discrete symbols or entities (see, e.g., König and Krüger 2006). During this process, the amount of information within the flow must decrease in order to map various input spike trains from the sensory periphery into the same discrete entity. This process of consecutive reduction of i

…(Full text truncated)…


Reference

This content is AI-processed based on ArXiv data.
