Sequences of events in noise-driven excitable systems with slow variables often show serial correlations among their inter-event intervals. Here, we employ a master equation for general non-renewal processes to calculate the interval and count statistics of superimposed processes governed by a slow adaptation variable. For an ensemble of spike-frequency adapting neurons, this results in a regularization of the population activity and enhanced post-synaptic signal decoding. We confirm our theoretical results in a population of cortical neurons.
Statistical models of events assuming the renewal property, i.e., that the instantaneous probability for the occurrence of an event depends only on the time since the last event, enjoy a long history of interest and applications in physics. However, many event processes in nature violate the renewal property. For instance, it is known that photon emission in multilevel quantum systems constitutes a non-renewal process [1]. Likewise, the time series of earthquakes typically exhibits a memory of previous shocks [2], as do the times of activated escape from a metastable state, as encountered in various scientific fields such as chemical, biological, and solid-state physics [3]. Often, the departure from the renewal property arises when the process under study is modulated by some slow variable, which results in serial correlations among the intervals between successive events. In particular, the majority of spiking neurons in the nervous systems of different species show a serial dependence between interspike intervals (ISIs), due to the fact that their spiking activity is modulated by an intrinsic slow variable of self-inhibition, a phenomenon known as spike-frequency adaptation [4].
In this letter, we present a non-renewal formalism based on a population density treatment that enables us to quantitatively study ensemble processes augmented with a slow noise variable. We formally derive general expressions for the higher-order interval and count statistics of single and superimposed non-renewal processes for arbitrary observation times. In spiking neurons, intrinsic mechanisms of adaptation reduce output variability and facilitate population coding in neural ensembles. We confirm our theoretical results in a set of experimental in vivo recordings and analyse their implications for the read-out properties of a postsynaptic neural decoder.
Non-renewal Master Equation. We define the limiting probability density for an event given the state variable x by the so-called hazard function h(x, t), where t denotes an explicit dependence on time due to external input, following [5,6]. Here, we assume that x obeys a shot-noise-like dynamics, which is widely used as a model of spike input,
$$\dot{x} = -\frac{x}{\tau} + q \sum_{k} \delta(t - t_k), \qquad (1)$$
where δ is the Dirac delta function, t_k is the time of the k-th event, and q is the quantal change in x at each event.
The dynamics of x deviates from standard treatments of shot noise (such as in [7]) in that the rate of events depends on x, as expressed by the hazard function.
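The dynamics just described can be simulated directly. The following minimal sketch integrates the decay of x between events, applies the jump q at each event, and draws events at the state-dependent hazard rate; the specific exponential hazard h(x) = r0 exp(-βx) and all parameter values are illustrative assumptions, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

tau, q = 0.1, 1.0          # adaptation time constant and quantal jump (Eq. (1))
r0, beta = 50.0, 1.0       # parameters of the illustrative hazard (assumed)
dt, T = 1e-4, 20.0         # integration step and total simulated time

def hazard(x):
    # Hypothetical state-dependent hazard; the formalism admits any h(x, t).
    return r0 * np.exp(-beta * x)

x = 0.0
event_times = []
for step in range(int(T / dt)):
    # An event occurs in [t, t + dt) with probability h(x) dt (Bernoulli approx.)
    if rng.random() < hazard(x) * dt:
        event_times.append(step * dt)
        x += q                 # shot-noise increment at the event
    x += -x / tau * dt         # exponential decay between events

isis = np.diff(event_times)
rho1 = np.corrcoef(isis[:-1], isis[1:])[0, 1]
print(f"{len(event_times)} events, mean ISI = {isis.mean():.4f} s, "
      f"lag-1 serial correlation = {rho1:.3f}")
```

Because the hazard is suppressed by the accumulated adaptation variable, successive intervals are not independent, which is exactly the non-renewal structure the formalism addresses.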
Much insight can be gained by applying the method of characteristics [8] to establish a link between the state variable x and its time-like variable t_x. For Eq. (1) we define t_x = η(x) := -τ ln(x/q), whereby dt_x/dt = 1. When an event occurs, t_x → ψ(t_x), where ψ(t_x) = η(η^{-1}(t_x) + q) = -τ ln(e^{-t_x/τ} + 1), with its inverse given by ψ^{-1}(t_x) = -τ ln(e^{-t_x/τ} - 1). Thus, we define the density of t_x,
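These identities can be checked directly from the definitions. Between events, Eq. (1) gives ẋ = -x/τ, so the drift of t_x is unity, and the event-triggered map follows by composition:

```latex
\begin{align}
\frac{\mathrm{d} t_x}{\mathrm{d} t}
  &= \eta'(x)\,\dot{x}
   = \left(-\frac{\tau}{x}\right)\left(-\frac{x}{\tau}\right) = 1,\\[4pt]
\psi(t_x) &= \eta\!\left(\eta^{-1}(t_x) + q\right)
   = \eta\!\left(q\,e^{-t_x/\tau} + q\right)
   = -\tau \ln\!\left(e^{-t_x/\tau} + 1\right),
\end{align}
```

using η^{-1}(t_x) = q e^{-t_x/τ}. The unit drift is what makes t_x a time-like coordinate for the population density treatment.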
where Θ_0(t_x) is the Heaviside step function. The density of t_x just before an event is derived as Pr*(t_x) = h(t_x) Pr_eq(t_x)/r_eq, where r_eq = ∫ h(t_x) Pr_eq(t_x) dt_x is a normalizing constant and also the process intensity, or rate, of the ensemble. Similarly, one can derive the distribution of t_x just after an event.
Then the relationship between t_x and the ordinary ISI distribution can be written as
where
where ρ̃_{n+1}(s, t_x^{n+1}) is the corresponding transformed density and L denotes the Laplace transform with respect to time [10]. Next, defining the operator P_n(s) and applying Bra-Ket notation as suggested in [10] leads to the Laplace transform of the n-th order event density,
where the operator P is associated with ρ̃(s) and, interestingly, corresponds to the moment generating function of the sum of n non-independent intervals, f̃_n(s), as defined in [11]. Now, following Eq. (2.15) in [11], we obtain the Laplace transform of the count distribution, denoted P̃(n, s).
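For orientation, the counting identity behind this step (stated here in the present notation, following the standard result quoted from [11]) relates the count distribution to the distribution functions of interval sums: the event {N(t) < n} coincides with the event that the sum of the first n intervals exceeds t, hence

```latex
\Pr\{N(t) = n\} = F_n(t) - F_{n+1}(t),
\qquad
\tilde{P}(n, s) = \frac{\tilde{f}_n(s) - \tilde{f}_{n+1}(s)}{s},
```

where F_n(t) is the probability that the sum of the first n intervals does not exceed t, and the Laplace transform uses L[F_n](s) = f̃_n(s)/s since F_n is the integral of the density of the n-interval sum.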
The Fano factor provides an index for the quantification of the count variability relative to the mean count,
where I is the identity operator. Note that assuming a renewal interval distribution in Eq. (4), one obtains Ã(s) = ρ̃(s)/(1 - ρ̃(s)), and L^{-1}[r_eq Ã(s)] = r_eq A(u) is the joint density of an event at time t and another event at time t + u. Thus, the autocorrelation of events is C(u) = r_eq[δ(u) + A(u)]. Now, by using Eq. (7) and the corresponding result in [11], the second moment of the count statistics can be derived. Thus, we obtain the Fano factor
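The renewal-case expression Ã(s) = ρ̃(s)/(1 - ρ̃(s)) follows by summing the contributions of successive events: for a renewal process, the density of the n-th subsequent event is the n-fold convolution of the ISI density, whose Laplace transform is ρ̃(s)^n, so

```latex
\tilde{A}(s) \;=\; \sum_{n=1}^{\infty} \tilde{\rho}(s)^{\,n}
             \;=\; \frac{\tilde{\rho}(s)}{1 - \tilde{\rho}(s)},
\qquad |\tilde{\rho}(s)| < 1 .
```

The non-renewal case treated here replaces this geometric sum by the operator expression above.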
The asymptotic property of F = lim_{T→∞} J_T can be derived from the result stated in Eq. (7.8) of [11] as
$$F = C_V^2 \left(1 + 2 \sum_{k=1}^{\infty} \xi_k\right), \qquad (9)$$
where ξ_k is the linear correlation coefficient between two intervals at lag k and C_V is the coefficient of variation of the intervals. Provided the limit exists [12], we plug ř and L[Ǎ(u)] into Eq. (9) and find that the resulting expressions are similar. Thus, we obtain
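The asymptotic relation F = C_V²(1 + 2 Σ_k ξ_k), taken here from the result of [11] cited in the text, can be illustrated numerically. The sketch below builds a synthetic interval sequence with a built-in negative lag-1 correlation (an MA(1)-style construction chosen purely for illustration, not the process of this letter) and evaluates the formula; negative serial correlations, as produced by adaptation, reduce the asymptotic Fano factor below the renewal baseline C_V².

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic ISIs with negative lag-1 correlation: x_i = mu + a_i - c*a_{i-1}.
# For this construction: xi_1 = -c/(1 + c^2), and xi_k = 0 for k >= 2.
mu, sigma, c, n = 1.0, 0.1, 0.5, 200_000
a = rng.normal(0.0, sigma, n + 1)
isi = mu + a[1:] - c * a[:-1]

cv2 = isi.var() / isi.mean() ** 2      # squared coefficient of variation

def serial_corr(x, k):
    # Linear correlation coefficient between intervals lagged by k.
    return np.corrcoef(x[:-k], x[k:])[0, 1]

xi = [serial_corr(isi, k) for k in range(1, 6)]
fano_asym = cv2 * (1.0 + 2.0 * sum(xi))   # asymptotic Fano factor, Eq. (9)

print(f"CV^2 = {cv2:.4f}, xi_1 = {xi[0]:.3f} (theory {-c/(1+c**2):.3f})")
print(f"asymptotic Fano factor = {fano_asym:.4f} (renewal baseline {cv2:.4f})")
```

With ξ_1 ≈ -0.4 the asymptotic Fano factor drops to roughly one fifth of the renewal value, the regularization effect attributed to adaptation in the letter.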
The left-hand side of Eq. (10) is indeed the Fano factor F of the ensemble process, as desired. Now, following the suggestion of [13],
where a_t and b_t are determined by the time-dependent
where the second term is the approximation of r_eq A(u).
Using Eq. (7), we obtain the corresponding expression. We now compute th