Responses to transient perturbation can distinguish intrinsic from latent criticality in spiking neural populations


The critical brain hypothesis posits that neural circuitry operates near criticality to reap the computational benefits of accessing a wide range of timescales. The theory of critical phenomena generally predicts heavy-tailed (power-law) correlations in space and time near criticality, but it has been argued that in the brain such correlations could be inherited from “latent variables,” such as external sensory signals that are not directly observed when recording from neural circuitry. Distinguishing whether heavy-tailed correlations in neural activity are intrinsically generated within a neural circuit or are driven by unobserved latent variables is crucial for properly interpreting circuit functions. We argue that measuring neural responses to sudden perturbative inputs, rather than correlations in ongoing activity, can disambiguate these cases. We demonstrate this approach in a model of stochastic spiking neuron populations receiving external latent input that can be tuned to a critical state. We propose a scaling theory for the covariance and response functions of the spiking network, which we validate with simulations. We end by discussing how our approach might generalize to models of neural populations with more realistic biophysical details.


💡 Research Summary

The paper tackles a long‑standing debate in neuroscience: whether the heavy‑tailed correlations observed in neural activity reflect genuine critical dynamics generated by recurrent circuitry, or whether they are merely inherited from unobserved common inputs (latent variables). To resolve this, the authors propose a novel experimental paradigm that focuses on the transient response of a spiking neural population to a large, brief perturbation, rather than on steady‑state correlations.

They model a population of stochastic spiking neurons using a nonlinear Hawkes process. Each neuron’s membrane potential V_i evolves according to
τ dV_i/dt = −V_i + E + J ∑_j (w_{ij} − δ_{ij}) ṅ_j(t) + x_i(t),
and spikes are generated as a Poisson process with rate ϕ(V_i)= (1+e^{−V_i})^{−1}. The recurrent connectivity is encoded in the adjacency matrix w_{ij} and the synaptic strength J.
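The dynamics above can be sketched with a simple Euler integration. This is a minimal illustration, not the paper's implementation: the parameter values are toy choices, and a random sparse adjacency stands in for the paper's cubic-lattice connectivity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy parameters for illustration only (not the paper's values)
N = 200          # neurons
tau = 10.0       # membrane time constant (ms)
E = -2.0         # constant drive
J = 0.5          # synaptic strength
dt, steps = 0.1, 2000   # Euler step (ms), number of simulated steps

# Random sparse adjacency as a stand-in for the paper's cubic lattice
w = (rng.random((N, N)) < 0.05).astype(float)
np.fill_diagonal(w, 0.0)
w /= w.sum(axis=1, keepdims=True) + 1e-12   # row-normalize

def phi(v):
    # Sigmoidal rate nonlinearity phi(V) = 1 / (1 + exp(-V))
    return 1.0 / (1.0 + np.exp(-v))

V = np.zeros(N)
spike_counts = np.zeros(N)
for _ in range(steps):
    # Spikes drawn as a Poisson process with instantaneous rate phi(V_i)
    n = rng.poisson(phi(V) * dt)
    # tau dV/dt = -V + E + J sum_j (w_ij - delta_ij) n_j; spike trains enter
    # as impulses, so each step adds (J/tau) * (w - I) @ n to the potentials
    V += (dt / tau) * (-V + E) + (J / tau) * ((w - np.eye(N)) @ n)
    spike_counts += n
```

The `−δ_{ij}` term makes each neuron's own spike hyperpolarize it, a soft reset that keeps the dynamics stable.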

External “latent” input x_i(t) is modeled as a non‑equilibrium φ⁴‑type field (referred to as Model A) that can be tuned to its own critical point by adjusting a self‑coupling parameter r. Its dynamics are
τ dx_i/dt = −g x_i³ + ∑_j (w_{ij} − r δ_{ij}) x_j + η_i(t),
with η_i a zero‑mean Gaussian white noise of variance 2σ². By varying r the input network undergoes a transition from a zero‑mean phase to a symmetry‑broken phase, belonging to the Ising universality class.
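An Euler–Maruyama sketch of this latent field is below. The ring coupling, parameter values, and noise scale are illustrative assumptions (the paper uses a cubic lattice); with row-normalized coupling the transition sits near r = 1, and the chosen r keeps the field in its zero-mean phase.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters (not the paper's values)
N = 500
tau, g, sigma = 10.0, 1.0, 0.5
r = 1.2          # self-coupling; r above the critical value keeps x near zero
dt, steps = 0.1, 5000

# Nearest-neighbor ring coupling as a 1D stand-in for the cubic lattice,
# row-normalized so the critical point sits near r = 1
w = np.zeros((N, N))
idx = np.arange(N)
w[idx, (idx + 1) % N] = 0.5
w[idx, (idx - 1) % N] = 0.5

x = np.zeros(N)
for _ in range(steps):
    # tau dx/dt = -g x^3 + sum_j (w_ij - r delta_ij) x_j + eta(t),
    # with <eta_i(t) eta_j(t')> = 2 sigma^2 delta_ij delta(t - t')
    drift = -g * x ** 3 + (w - r * np.eye(N)) @ x
    noise = np.sqrt(2.0 * sigma ** 2 * dt) * rng.standard_normal(N)
    x += (dt / tau) * drift + noise / tau
```

Lowering r toward 1 slows the relaxation of the uniform mode, which is how the input's coherence time is tuned toward divergence.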

The key theoretical contribution is a scaling analysis of two observables: (1) the population‑averaged response R_{±}(t) to a step change ΔV = ±5 mV applied simultaneously to all neurons, and (2) the autocovariance C_{\dot n\dot n}(t) of the spike trains in the unperturbed steady state. For a network tuned to its synaptic critical point (J ≈ J_c) and driven at its input critical point (r ≈ r_c), the response is predicted to follow
R_{\dot n V}(t) ∼ t^{−(d−2+η*)/(2z*)} F(t/ξ_spk),
with d = 3, η* ≈ 0.036, and z* ≈ 2.02 (close to mean‑field values). The autocovariance, in the absence of input, should scale as
C_{\dot n\dot n}(t) ∼ t^{−(d−2+η)/z} G(t/ξ_spk).
When latent input is present, an additional term proportional to σ² t^{‑d/2+1} min(ξ_spk,ξ_lat)² appears, reflecting the contribution of the input fluctuations. The coherence times ξ_spk and ξ_lat diverge as the respective systems approach criticality, leading to a crossover in the scaling functions.
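The data-collapse logic behind these scaling forms can be made concrete with synthetic curves. Assuming a stand-in scaling function F(u) = exp(−u) (a choice for illustration, not the paper's measured F), rescaling each curve by t^α and plotting against u = t/ξ collapses curves with different coherence times onto a single function:

```python
import numpy as np

# Exponents quoted in the text (d = 3, eta* ~ 0.036, z* ~ 2.02)
d, eta_star, z_star = 3, 0.036, 2.02
alpha = (d - 2 + eta_star) / (2 * z_star)   # power-law decay exponent of R(t)

t = np.logspace(0.0, 3.0, 200)

def response(t, xi):
    # Synthetic stand-in for R(t) ~ t^{-alpha} F(t/xi), with F(u) = exp(-u)
    return t ** (-alpha) * np.exp(-t / xi)

# Data collapse: multiply each curve by t^alpha and express it on the
# common scaling axis u = t / xi; all curves then trace out F(u)
xis = (50.0, 200.0, 800.0)
collapsed = {xi: t ** alpha * response(t, xi) for xi in xis}
```

In the paper the same procedure runs in reverse: the quality of the collapse across conditions is the evidence that the proposed scaling form, and hence criticality, actually holds.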

Simulations were performed on a 25³ cubic lattice (15,625 neurons) with a timestep of 0.1 ms. After allowing the coupled system to reach a steady state, the authors recorded autocovariances for 15 s and response curves for 15 s following a perturbation. Observables were averaged over 25 spatially adjacent neurons (to mimic limited experimental sampling) and over independent trials: 400 for the response and 10 for the covariance. Four regimes were examined: (i) both network and input sub‑critical, (ii) network critical / input sub‑critical, (iii) network sub‑critical / input critical, and (iv) both critical.
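The trial-averaging step in this protocol can be sketched as follows. The rasters here are synthetic stand-ins (a Poisson baseline plus a hypothetical decaying transient), chosen only to show how averaging over trials and sampled neurons isolates the perturbation-locked response:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical recording geometry mirroring the protocol described above:
# 25 sampled neurons, 400 perturbation trials, binned spike counts
n_trials, n_neurons, n_bins = 400, 25, 300
dt = 0.1  # ms per bin
t = np.arange(n_bins) * dt

# Synthetic rasters: baseline Poisson rate plus a decaying evoked transient
# (stand-ins for the model's actual post-perturbation dynamics)
baseline = 0.02                      # spikes per bin
evoked = 0.05 * np.exp(-t / 5.0)     # transient locked to the perturbation
spikes = rng.poisson(baseline + evoked, size=(n_trials, n_neurons, n_bins))

# Averaging over trials and sampled neurons suppresses fluctuations that
# are not locked to the perturbation, isolating the evoked response R(t)
R = spikes.mean(axis=(0, 1)) / dt - baseline / dt
```

Because latent-input fluctuations are not time-locked to the repeated perturbation, they average out here in the same way the independent Poisson noise does.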

Results confirmed the theoretical predictions. When the spiking network was near its critical synaptic strength, the response R_{\dot n V}(t) displayed a clear power‑law decay, independent of the state of the latent input. In contrast, the autocovariance always showed a heavy tail when either the network or the input was critical, but the tail was truncated by finite‑size effects and by the crossover set by ξ_spk/ξ_lat. Data‑collapse analyses using the predicted scaling forms (Eqs. 4 and 5) yielded excellent agreement for the response function and, to a lesser extent, for the covariance when the network was close to criticality. The quality of the collapse deteriorated when the network was far from J_c, consistent with the expectation that scaling holds only near the phase transition.

The authors argue that measuring trial‑averaged responses to repeated, large‑scale perturbations effectively averages out the contribution of common inputs, thereby isolating the intrinsic critical dynamics of the recurrent circuit. This contrasts with traditional analyses that rely on covariance or correlation matrices, which conflate internal dynamics with external drive. Moreover, the response‑based method is robust to partial sampling of the population, a realistic constraint in electrophysiology and calcium imaging.

In the discussion, the paper acknowledges that the model assumes a regular cubic lattice, homogeneous synaptic weights, and a one‑to‑one mapping between latent units and neurons—simplifications that do not hold in real cortical tissue. Nevertheless, prior work has shown that the Hawkes‑type network exhibits criticality even without spatial structure, suggesting that the core conclusions may generalize. The authors highlight several avenues for future work: (1) incorporating heterogeneous connectivity and disorder, which can shift critical exponents or smear the transition; (2) allowing multiple latent inputs per neuron and more complex input topologies; (3) extending the framework to biologically realistic neuron models (e.g., conductance‑based, adaptive exponential integrate‑and‑fire) and to experimental paradigms such as optogenetic perturbations or sensory‑evoked transients.

Overall, the study provides a concrete, experimentally tractable strategy to distinguish intrinsic from latent sources of criticality in neural circuits. By focusing on the temporal relaxation after a controlled perturbation, researchers can directly probe the scaling laws that are hallmarks of a genuine phase transition, thereby advancing our understanding of whether the brain truly exploits critical dynamics for computation.

