Finite integration time can shift optimal sensitivity away from criticality

Reading time: 5 minutes

📝 Original Info

  • Title: Finite integration time can shift optimal sensitivity away from criticality
  • ArXiv ID: 2602.09491
  • Date: 2026-02-10
  • Authors: Not specified in the provided paper metadata (arXiv:2602.09491v1, February 10, 2026)

📝 Abstract

Sensitivity to small changes in the environment is crucial for many real-world tasks, enabling living and artificial systems to make correct behavioral decisions. It has been shown that such sensitivity is maximized when a system operates near the critical point of a phase transition. However, proximity to criticality introduces large fluctuations and diverging timescales; hence, leveraging the maximal sensitivity would require impractically long integration periods. Here, we analytically and computationally demonstrate how the optimal tuning of a recurrent neural network is determined given a finite integration time. Rather than maximizing the theoretically available sensitivity, we find that networks attain different sensitivities depending on the available time. Consequently, the optimal dynamic regime can shift away from criticality when integration times are finite, highlighting the necessity of incorporating finite-time considerations into studies of information processing.

📄 Full Content

Living systems must efficiently encode relevant environmental information while being sensitive to small changes. Increasing evidence suggests that many natural systems tackle this challenge by operating near a critical phase transition [1]. Signatures of near-critical dynamics have been observed across different scales, from collective behaviors in flocks of birds [2] to cellular diversity in stem cell populations [3], and most notably in the brain [4][5][6][7][8]. The proposed advantage of operating near a critical point is that phase transitions endow systems with computational benefits, including elevated sensitivity and correlation [9,10], maximized dynamic range [11], enhanced information flow [12][13][14], optimal input representation [15,16], and a diverse spectrum of dynamical responses [17].

Operating in the vicinity of a critical phase transition offers significant advantages but comes with inherent challenges. While the enhanced sensitivity of critical systems makes them ideal for some tasks, it also increases their vulnerability to noise, which is further amplified by critical slowing down [18,19]. A recent example of this is decision-making by integrated Ising models, where operating at a distance from a phase transition allows control of the trade-off between reaction time and error rate [20]. More generally, such a trade-off can be formulated as an optimization problem with a control parameter λ (in our case changing the distance to criticality) that regulates both a beneficial gain G(λ) and a detrimental loss L(λ) with some weighting factor γ, i.e.,

$\lambda^{*} = \arg\max_{\lambda} \left\{ G(\lambda) - \gamma L(\lambda) \right\}.$  (1)
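As a minimal numerical sketch of Eq. (1), the snippet below scans λ for the optimum. The gain and loss functions are illustrative assumptions, not taken from the paper: both diverge at the critical point λ = 1, with the loss diverging faster.

```python
import numpy as np

# Toy trade-off of Eq. (1). The functional forms of G and L below are
# assumptions for illustration only: the gain grows toward the critical
# point lambda = 1, while the loss (e.g. fluctuations amplified by
# critical slowing down) grows even faster.

def gain(lam):
    """Toy sensitivity gain, diverging as the critical point lam = 1 nears."""
    return 1.0 / (1.0 - lam)

def loss(lam):
    """Toy loss from critical fluctuations, diverging faster than the gain."""
    return 1.0 / (1.0 - lam) ** 2

def optimal_lambda(gamma, lams):
    """Return lambda* = argmax_lambda { G(lambda) - gamma * L(lambda) }."""
    objective = gain(lams) - gamma * loss(lams)
    return lams[np.argmax(objective)]

lams = np.linspace(0.0, 0.999, 10_000)
for gamma in (0.01, 0.1, 0.5):
    print(f"gamma = {gamma}: lambda* = {optimal_lambda(gamma, lams):.3f}")
```

With these toy forms the objective is maximized at λ* = 1 − 2γ (clipped to λ* ≥ 0), so a heavier weighting γ of the loss pushes the optimum further from criticality.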

Both gains and losses depend on the particularities of the system. Thus, the optimal tuning of λ, and thereby the optimal distance to criticality, will depend on the specific system and the requirements of each task [21]. For example, fish schools balance reaction time and energy cost in their alarmed state [22], while neuromorphic computing and artificial networks adjust their state to match memory requirements for optimal functioning [23,24]. Despite these observations, it remains a challenge to quantitatively assess the trade-off between gain and loss that determines the optimal distance from criticality.

A famous example of how criticality can assist encoding in the brain is the dynamic range. The dynamic range quantifies the range of continuous input features that can be encoded by the nonlinear firing-rate response of a neuron. It is commonly defined as the logarithmic range of inputs h for which the output lies between the 10th and 90th percentiles of all outputs [11], i.e., $\Delta = 10 \log_{10}(h_{0.9}/h_{0.1})$, a choice that excludes responses indistinguishable from the noise floor at low activity and from the saturation regime at high activity. Examples include the encoding of correlations in the visual field [25], odor concentration [26], and sound level [27,28].
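To make the definition concrete, here is a short sketch that computes Δ for a saturating input-output curve; the toy curve F(h) = h/(1 + h) is a stand-in assumption, not the network response studied in the paper.

```python
import numpy as np

# Sketch: dynamic range Delta = 10*log10(h_0.9 / h_0.1) of a saturating
# response curve. The response F(h) = h / (1 + h) is an assumed stand-in;
# in the paper the response is the firing rate of the recurrent network.

def response(h):
    """Toy saturating input-output curve F(h) = h / (1 + h)."""
    return h / (1.0 + h)

h = np.logspace(-3, 3, 100_000)   # input strengths on a logarithmic grid
F = response(h)
F0, F1 = F.min(), F.max()

# Inputs whose output crosses 10% and 90% of the output range
# (F is monotonically increasing, so np.interp inverts it).
h_01 = np.interp(F0 + 0.1 * (F1 - F0), F, h)
h_09 = np.interp(F0 + 0.9 * (F1 - F0), F, h)

delta = 10.0 * np.log10(h_09 / h_01)
print(f"h_0.1 = {h_01:.3f}, h_0.9 = {h_09:.3f}, Delta = {delta:.1f} dB")
```

For this toy curve, h₀.₁ ≈ 0.11 and h₀.₉ ≈ 9, giving Δ ≈ 19 dB.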

Unfortunately, the dynamic range of a single cell is usually much smaller than the dynamic range of perception. This dynamic-range problem can be solved with the emergent properties of recurrent interactions, which were shown to drastically increase the dynamic range as the network approaches criticality [11,29,30]. Exploiting close-to-critical emergence has also been observed in structures with heterogeneous [31], modular [32], or hierarchical [33] organization. However, previous work neglected the population-activity fluctuations that emerge close to criticality and can hinder confidence in discrimination.

In this work, we combine analytical calculations, numerical simulations, and machine-learning approximations to quantify the optimal balance between input discrimination confidence and the sensitivity of a recurrent neural network, controlled by its recurrent interaction strength λ and the timescale T of a leaky readout (Fig. 1).

[Fig. 1 caption excerpt, displaced by extraction: The first inputs that are discriminable from zero and full activity mark the dynamic range (black dashed lines). From these, we can construct sets of discriminable inputs, marked by the black triangles (see text for details).]

To formalize this optimization problem, we introduce two generalized measures of dynamic range derived from the discriminability of inputs and provide analytical results for the limiting cases of instantaneous readout and infinite integration time. We find that the optimal state, λ*, of the network depends on the required confidence and integration time, with a safety margin from the precise critical point for all finite integration times. We consider a random network of probabilistic spiking neurons that can be activated externally and recurrently (see Methods for details). To mimic processing and transmission, only a random subset of neurons receives input, while another random subset of neurons serves as output (Fig. 1a). Input neurons receive uncorrelated, independent Poisson spike trains with a rate h, which represents the input strength. The recurrent interaction …
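The excerpt breaks off here, but the setup described above can be sketched as follows. All specifics (network size, connectivity, the exact update and readout rules) are assumptions for illustration, not the paper's exact model: probabilistic neurons on a random graph with interaction strength λ, Poisson input of rate h to a random input subset, and an exponential (leaky) readout with timescale T over a random output subset.

```python
import numpy as np

rng = np.random.default_rng(0)

N, K = 1000, 10      # neurons and out-degree of the random graph (assumed)
dt = 1e-3            # simulation time step in seconds (assumed)

def simulate(lam, h, T, steps=5000, n_in=100, n_out=100):
    """Probabilistic spiking network with recurrent strength lam,
    Poisson input rate h to a random subset, leaky readout timescale T."""
    targets = rng.integers(0, N, size=(N, K))   # K random targets per neuron
    p = lam / K                                 # activation prob. per link
    inputs = rng.choice(N, size=n_in, replace=False)
    outputs = rng.choice(N, size=n_out, replace=False)

    active = np.zeros(N, dtype=bool)
    readout, trace = 0.0, np.empty(steps)
    for t in range(steps):
        new = np.zeros(N, dtype=bool)
        # Uncorrelated, independent Poisson drive of rate h to input neurons.
        new[inputs] = rng.random(n_in) < 1.0 - np.exp(-h * dt)
        # Recurrent activation: each active neuron excites its K targets
        # with probability p, so lam sets the branching strength.
        for j in np.flatnonzero(active):
            new[targets[j][rng.random(K) < p]] = True
        active = new
        # Leaky (exponential) readout of the output population activity.
        readout += (dt / T) * (active[outputs].mean() - readout)
        trace[t] = readout
    return trace

# Example: mean readout vs. recurrent strength for a weak input.
for lam in (0.5, 0.9, 0.99):
    trace = simulate(lam, h=5.0, T=0.1)
    print(f"lambda = {lam}: mean readout = {trace[len(trace)//2:].mean():.4f}")
```

As λ approaches 1 the readout responds more strongly to weak inputs but also fluctuates on longer timescales, which is the tension between sensitivity and finite integration time that the paper quantifies.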

Reference

This content is AI-processed based on open access ArXiv data.
