On Bootstrap Percolation in Living Neural Networks
Recent experimental studies of living neural networks reveal that their global activation induced by electrical stimulation can be explained using the concept of bootstrap percolation on a directed random network. The experiment consists of externally activating an initial random fraction of the neurons and observing the firing process until it reaches equilibrium. The final proportion of active neurons depends in a nonlinear way on the initial fraction. The main result of this paper is a theorem that yields the asymptotics of the final proportion of fired neurons when the interacting network is modelled as a directed random graph with given node degrees. This gives a rigorous mathematical proof of a phenomenon observed by physicists in neural networks.
💡 Research Summary
The paper provides a rigorous mathematical treatment of the global activation observed in living neural networks when a small random fraction of neurons is externally stimulated. The authors model the neural tissue as a directed random graph whose vertices represent neurons and whose edges represent directed synaptic connections. Each vertex i is assigned an in‑degree d_i^{in} and an out‑degree d_i^{out} drawn independently from prescribed degree distributions P_in and P_out, thereby capturing the heterogeneous and asymmetric connectivity typical of biological networks.
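A directed random graph with prescribed degree sequences can be sampled by the usual stub-matching (configuration-model) construction. The sketch below is illustrative; the function name and implementation details are not taken from the paper:

```python
import random

def directed_configuration_graph(in_degrees, out_degrees, seed=0):
    """Build a directed multigraph by randomly pairing out-stubs with in-stubs
    (a directed configuration model).  The two degree sequences must sum to
    the same number of edges."""
    assert sum(in_degrees) == sum(out_degrees)
    rng = random.Random(seed)
    # One "stub" per unit of degree, labelled by the vertex it belongs to.
    in_stubs = [v for v, d in enumerate(in_degrees) for _ in range(d)]
    out_stubs = [v for v, d in enumerate(out_degrees) for _ in range(d)]
    rng.shuffle(in_stubs)
    # Pair stubs uniformly at random; self-loops and multi-edges may occur,
    # but they are rare in the sparse large-n regime.
    return list(zip(out_stubs, in_stubs))  # edges as (tail, head) pairs
```

Each returned pair `(tail, head)` represents one directed synapse from neuron `tail` to neuron `head`.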
The dynamical rule is a bootstrap percolation process with threshold θ_i = 1 (the simplest case). At time t = 0 a fraction p₀ of vertices is activated independently; thereafter, any inactive vertex becomes active at the next discrete time step if it receives at least one active incoming edge. Once a vertex becomes active it remains so forever. This “once‑activated‑forever” rule mirrors the irreversible firing of a neuron after it reaches its firing threshold.
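Because θ = 1 and activation is irreversible, the dynamics reduce to following directed paths out of the seed set. A minimal simulation of this rule might look as follows (names and data layout are illustrative assumptions, not the authors' code):

```python
def bootstrap_percolation(n, edges, initially_active):
    """Run the theta = 1 bootstrap rule on n vertices: an inactive vertex
    fires as soon as any in-neighbour is active; activation never reverts."""
    out_neighbours = [[] for _ in range(n)]
    for tail, head in edges:
        out_neighbours[tail].append(head)
    active = set(initially_active)
    frontier = list(active)
    while frontier:                  # spread activity until no vertex changes
        v = frontier.pop()
        for w in out_neighbours[v]:
            if w not in active:
                active.add(w)        # irreversible: once active, stays active
                frontier.append(w)
    return active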
To analyse the asymptotic behaviour as the number of neurons n → ∞, the authors employ the locally tree‑like approximation, which is justified for sparse random graphs. They introduce the generating functions G_in(x) = Σ_k P_in(k) x^k and G_out(x) = Σ_k P_out(k) x^k, together with the average out‑degree ⟨d^{out}⟩. By tracking the probability q that a uniformly chosen directed edge emanates from an active vertex, and can therefore transmit activation to its head, they derive a self‑consistency (fixed‑point) equation in the limit:
q = p₀ + (1 − p₀)·[1 − G_in(1 − q)],

where 1 − G_in(1 − q) is the probability that a vertex whose in‑degree is drawn from P_in receives at least one active incoming edge.
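Assuming the self‑consistency equation has the standard threshold‑1 form q = p₀ + (1 − p₀)[1 − G_in(1 − q)], it can be solved numerically by fixed‑point iteration starting from q = p₀. The sketch below takes Poisson(λ) in‑degrees, for which G_in(x) = e^{λ(x−1)}; the Poisson choice and the function name are illustrative assumptions, not details from the paper:

```python
import math

def final_active_fraction(p0, lam, tol=1e-12):
    """Iterate q -> p0 + (1 - p0) * (1 - G_in(1 - q)) to its fixed point,
    here for Poisson(lam) in-degrees, where G_in(1 - q) = exp(-lam * q)."""
    q = p0
    while True:
        q_new = p0 + (1 - p0) * (1 - math.exp(-lam * q))
        if abs(q_new - q) < tol:
            return q_new
        q = q_new                 # monotone increase, bounded by 1
```

Starting the iteration at q = p₀ makes the sequence monotone and bounded, so it converges to the smallest fixed point above p₀, which is the physically relevant solution.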