Towards Google matrix of brain
We apply the approach of the Google matrix, used in computer science and for the World Wide Web, to the description of the properties of neuronal networks. The Google matrix ${\bf G}$ is constructed on the basis of the neuronal network of a brain model discussed in PNAS {\bf 105}, 3593 (2008). We show that the spectrum of eigenvalues of ${\bf G}$ has a gapless structure with long-lived relaxation modes. The PageRank of the network becomes delocalized for certain values of the Google damping factor $\alpha$. The properties of other eigenstates are also analyzed. We discuss further parallels and similarities between the World Wide Web and neuronal networks.
💡 Research Summary
The paper introduces a novel interdisciplinary approach that applies the Google matrix formalism, originally devised for ranking web pages, to the analysis of a large-scale neuronal network. The authors base their study on the cortical model described in PNAS 105, 3593 (2008), which consists of roughly 10 000 neurons connected by about 30 000 directed synapses. By representing each neuron as a node and each synapse as a directed edge, the network is cast into a directed graph suitable for the construction of a Google matrix ${\bf G}$.
The Google matrix is defined as ${\bf G} = \alpha {\bf S} + (1-\alpha)\, v e^{\top}$, where ${\bf S}$ is the column-stochastic matrix obtained by normalising the outgoing links of each neuron, $v$ is a uniform probability vector, $e$ is a vector of ones, and $\alpha$ (the damping factor) controls the probability of following a genuine synaptic link versus "teleporting" to a random neuron. The authors explore several values of $\alpha$ (0.85, 0.95, 0.99) to assess how the damping factor influences spectral properties and the stationary distribution (PageRank).
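This construction can be sketched numerically. The snippet below is an illustrative sketch, not code from the paper: the function name and the dense-matrix representation are my own choices. It builds ${\bf G}$ from a 0/1 adjacency matrix, replacing the columns of dangling nodes (neurons with no outgoing synapses) with the uniform vector, as is standard for the Google matrix.

```python
import numpy as np

def google_matrix(A, alpha=0.85):
    """Build G = alpha*S + (1 - alpha)/N from a directed adjacency matrix A.

    Convention: A[i, j] = 1 means a synaptic link from neuron j to
    neuron i, so each column of S sums to one (column-stochastic).
    Columns of dangling nodes (no outgoing links) become uniform.
    """
    N = A.shape[0]
    S = A.astype(float).copy()
    out_degree = S.sum(axis=0)            # outgoing links of each node
    S[:, out_degree == 0] = 1.0 / N       # dangling column -> uniform
    out_degree[out_degree == 0] = 1.0
    S /= out_degree                       # normalise each column
    return alpha * S + (1.0 - alpha) / N  # uniform teleportation term
```

With the uniform teleportation term, every entry of ${\bf G}$ is strictly positive, so the Perron-Frobenius theorem guarantees a unique stationary PageRank vector.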
Spectral analysis reveals a strikingly gapless eigenvalue distribution. Unlike random graphs, where a clear spectral gap separates the leading eigenvalue ($\lambda = 1$) from the bulk, the neuronal Google matrix exhibits a dense cluster of eigenvalues with moduli close to one. This indicates the presence of long-lived relaxation modes, suggesting that the network can sustain slow dynamical processes, potentially related to memory retention or persistent activity observed in real neural tissue.
The PageRank vector $p(\alpha)$ is examined as a measure of node centrality. For low damping ($\alpha = 0.85$), $p$ is highly localized on a small set of hub neurons, mirroring the classic web-page ranking where a few highly connected pages dominate. As $\alpha$ approaches unity, the distribution becomes markedly delocalized: the entropy of $p$ rises sharply, and the probability mass spreads more uniformly across the network. This transition implies that, under conditions where the random-walk process heavily favours actual synaptic pathways, the brain's information flow is not confined to a few dominant pathways but is distributed across many neurons, supporting both robustness and flexible integration.
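The delocalization diagnostic described above can be sketched with a standard power-iteration PageRank and the Shannon entropy $H = -\sum_i p_i \ln p_i$, which grows toward its maximum $\ln N$ as the probability mass spreads uniformly. The helper names below are my own; this is a generic sketch of the diagnostic, not the paper's code.

```python
import numpy as np

def pagerank(G, tol=1e-12, max_iter=10_000):
    """Stationary vector of a column-stochastic Google matrix G
    (leading eigenvector, eigenvalue 1) via power iteration."""
    N = G.shape[0]
    p = np.full(N, 1.0 / N)
    for _ in range(max_iter):
        p_next = G @ p
        if np.abs(p_next - p).sum() < tol:
            break
        p = p_next
    return p_next

def entropy(p):
    """Shannon entropy of p: ~0 when localized, ln(N) when uniform."""
    q = p[p > 0]
    return float(-(q * np.log(q)).sum())
```

Computing `entropy(pagerank(google_matrix(A, alpha)))` for a range of $\alpha$ values then traces out the localization-delocalization behaviour discussed in the paper.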
Beyond the leading eigenvector, the authors analyse subdominant eigenvectors. Second‑order eigenvectors often concentrate on specific neuronal clusters, hinting at functional modules that can act as quasi‑independent oscillatory units. Higher‑order eigenvectors display wave‑like patterns extending over large portions of the network, reflecting global modes that could underlie coordinated activity across distant brain regions.
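Whether an eigenvector is concentrated on a cluster or extended over the whole network is commonly quantified by the inverse participation ratio (IPR); the definition below is the standard one, offered as an illustrative diagnostic rather than necessarily the exact measure used in the paper.

```python
import numpy as np

def ipr(psi):
    """Inverse participation ratio xi = (sum|psi_i|^2)^2 / sum|psi_i|^4.

    xi ~ 1 for an eigenvector localized on a single node and
    xi ~ N for one spread uniformly over all N nodes.
    """
    w = np.abs(np.asarray(psi, dtype=complex)) ** 2
    return float((w.sum() ** 2 / (w ** 2).sum()).real)
```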
The discussion draws parallels between the World Wide Web and neuronal networks. Both systems share directed, sparse, and approximately scale‑free connectivity, and both benefit from random‑walk based centrality measures to uncover hidden structural hierarchies. However, the brain introduces additional layers of complexity—synaptic weights, plasticity, and time‑dependent rewiring—that are absent in static web graphs. The authors propose that future extensions of the Google matrix framework should incorporate weighted adjacency matrices, adaptive damping factors, and temporal dynamics to capture these biological nuances.
In conclusion, the study demonstrates that the Google matrix is a powerful analytical tool for probing the structural and dynamical organization of large neuronal networks. Its ability to reveal gapless spectra, long‑lived modes, and the delocalization transition of PageRank offers fresh insights into how neural circuits balance localized processing with global integration. This work opens avenues for applying network‑theoretic techniques from computer science to neuroscience, potentially informing the design of more brain‑like artificial neural networks and advancing our understanding of information flow in the brain.