Graph neural networks uncover structure and functions underlying the activity of simulated neural assemblies

Reading time: 5 minutes
...

📝 Original Info

  • Title: Graph neural networks uncover structure and functions underlying the activity of simulated neural assemblies
  • ArXiv ID: 2602.13325
  • Date: 2026-02-11
  • Authors: Author information was not provided in the source data.

📝 Abstract

Graph neural networks trained to predict observable dynamics can be used to decompose the temporal activity of complex heterogeneous systems into simple, interpretable representations. Here we apply this framework to simulated neural assemblies with thousands of neurons and demonstrate that it can jointly reveal the connectivity matrix, the neuron types, the signaling functions, and in some cases hidden external stimuli. In contrast to existing machine learning approaches such as recurrent neural networks and transformers, which emphasize predictive accuracy but offer limited interpretability, our method provides both reliable forecasts of neural activity and interpretable decomposition of the mechanisms governing large neural assemblies.

💡 Deep Analysis

📄 Full Content

We have shown that GNNs can decompose complex dynamical systems into interpretable representations (Allier et al., 2024). We jointly learned pairwise interaction functions and local update rules together with a latent representation of the different objects present in the system, and an implicit representation of external stimuli. This approach can resolve the complexity arising from heterogeneity in large N-body systems that are affected by complex external inputs. Here, we leverage this technique to model the simulated activity of large heterogeneous neural assemblies. We retrieve the connectivity matrix, neuron types, signaling functions, local update rules, and external stimuli from activity data alone, yielding a fully functional mechanistic approximation of the original network with excellent roll-out performance.

[Figure: Schematic of the GNN. Each neuron (node i) receives activity signals xⱼ from connected neurons (nodes j), processed by a transfer function ψ* and weighted by the matrix W. The sum of these messages is updated with the functions ϕ* and Ω* to obtain the predicted activity rate ẋᵢ. In addition to the observed activity xᵢ, the GNN has access to a learnable latent vector aᵢ associated with each node i.]

Prior studies have successfully retrieved functional connectivity, functional properties of neurons, or predictive models of neural activity at the population level, but a method to infer an interpretable mechanistic model of complex neural assemblies is still missing. Mi et al. (2021) infer hidden voltage dynamics and functional connectivity from calcium recordings of parts of the C. elegans nervous system using a GNN built around a simple biophysical neuron model. While its predictive performance is modest, it demonstrates that integrating real activity recordings with connectivity can yield experimentally testable predictions beyond simulation. Pospisil et al. (2024) used the connectome of Drosophila melanogaster (Dorkenwald et al., 2024) to create simulations of full-brain activity and inferred effective connectivity from known perturbations of individual neurons with a simple linear model. Lacking information about connectivity, Wang et al. (2025) trained a foundation model of mouse visual cortex activity on real two-photon recordings from 135,000 neurons, which predicts responses to both natural videos and out-of-distribution stimuli across animals. Prasse and Van Mieghem (2022) focus on accurate prediction without recovering connectivity, while Yu, Ding, and Li (2025) infer symbolic dynamical equations without validating connectivity weights. Both work with simulated data and do not learn neuronal heterogeneity.
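To make the message-passing step described in the figure caption concrete, here is a minimal PyTorch sketch of one forward pass. The module layout, the choice of feeding aᵢ (rather than aⱼ) to ψ*, and the dense N×N formulation are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class NeuralAssemblyGNN(nn.Module):
    """Sketch of the message-passing update: neuron i aggregates
    W_ij * psi(x_j, a_i) over presynaptic neurons j, and phi maps its own
    activity, its latent a_i and the aggregate to the predicted rate dx_i/dt."""

    def __init__(self, latent_dim=2, hidden=64):
        super().__init__()
        # psi*: learned transfer function applied to presynaptic activity
        self.psi = nn.Sequential(
            nn.Linear(1 + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )
        # phi*: learned local update rule
        self.phi = nn.Sequential(
            nn.Linear(2 + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x, a, W):
        # x: (N,) activities, a: (N, latent_dim) latent vectors, W: (N, N) weights
        N = x.shape[0]
        x_j = x.unsqueeze(0).expand(N, N).unsqueeze(-1)             # x_j for every pair (i, j)
        a_i = a.unsqueeze(1).expand(N, N, a.shape[-1])              # a_i broadcast over j
        msg = self.psi(torch.cat([x_j, a_i], dim=-1)).squeeze(-1)   # psi*(x_j, a_i), shape (N, N)
        agg = (W * msg).sum(dim=1)                                  # sum_j W_ij * psi*(x_j, a_i)
        out = self.phi(torch.cat([x.unsqueeze(-1), a, agg.unsqueeze(-1)], dim=-1))
        return out.squeeze(-1)                                      # predicted dx_i/dt, shape (N,)
```

In practice the sum would run over an explicit edge list rather than a dense N×N tensor, and the stimulus network Ω* would additionally scale the aggregated messages, as in the model equations below.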

Our method learns the functional properties of individual neurons and their connections, enabling transfer of the learned dynamics to different network configurations with modified connectivity or different neuron type compositions.

We simulated the activity of neural assemblies with the model described by Stern, Istrate, and Mazzucato (2023). We represent the neural system as a graph of N nodes corresponding to neurons and edges representing weighted synaptic connections. The activity signal xᵢ represents the neural dynamics. Each neuron (node i) receives signals from connected neurons (nodes j) and updates its own activity xᵢ according to

ẋᵢ = −xᵢ/τ + s·ϕ(xᵢ) + gᵢ · Σⱼ Wᵢⱼ ψᵢⱼ(xⱼ) + ηᵢ(t)    (1)

These systems can generate activity across a wide range of time scales, similar to what is observed between cortical regions. The damping effect (first term) is parameterized by τ, the self-coupling (second term) is parameterized by s, and g scales the aggregated messages. The matrix W contains the synaptic weights multiplying the transfer function ψᵢⱼ(xⱼ). The weights were drawn from a Cauchy distribution with µ = 0 and σ² = 1/N. Positive and negative weights indicate excitation and inhibition, respectively. The last term ηᵢ(t) is Gaussian noise with zero mean. In our experiments, the number of neurons N was 1,000 or 8,000. We set gᵢ to 10 and used values between 0.25 and 8 for τ and s to test different self-coupling regimes, as suggested by Stern, Sompolinsky, and Abbott (2014). First, we chose tanh(x) for both ϕ(x) and ψᵢⱼ(x). Later, we made the function ψᵢⱼ dependent on the neuron or on the interaction between two neurons by replacing ψᵢⱼ(xⱼ) with a parameterized tanh defined per neuron or per pair, respectively, with γ and θ parameterizing the different neuron types. Finally, we introduced external stimuli into the dynamics through a time-dependent function Ωᵢ(t) that scales the aggregated messages. The model used in our simulations is therefore

ẋᵢ = −xᵢ/τ + s·ϕ(xᵢ) + gᵢ · Ωᵢ(t) · Σⱼ Wᵢⱼ ψᵢⱼ(xⱼ) + ηᵢ(t)    (2)
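As a concrete illustration of these dynamics, the sketch below integrates Eq. (2) with a simple Euler–Maruyama scheme, using ϕ = ψ = tanh and a constant placeholder stimulus Ωᵢ(t) = 1. The time step, noise level, and simulation length are illustrative assumptions, not the exact settings used in the paper.

```python
import numpy as np

def simulate(N=1000, T=2000, dt=0.1, tau=1.0, s=1.0, g=10.0,
             noise_std=0.01, seed=0):
    """Euler-Maruyama integration of Eq. (2) with phi = psi = tanh
    and, for simplicity, a constant stimulus Omega_i(t) = 1."""
    rng = np.random.default_rng(seed)
    # synaptic weights: heavy-tailed Cauchy entries, scaled by 1/N (per the text)
    W = rng.standard_cauchy((N, N)) / N
    x = rng.normal(0.0, 1.0, N)                 # initial activities
    traj = np.empty((T, N))
    for t in range(T):
        omega = 1.0                             # placeholder for Omega_i(t)
        messages = W @ np.tanh(x)               # sum_j W_ij * psi(x_j)
        dx = -x / tau + s * np.tanh(x) + g * omega * messages
        x = x + dt * dx + np.sqrt(dt) * noise_std * rng.normal(size=N)
        traj[t] = x
    return traj

activity = simulate()                           # (T, N) array of simulated activities
```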

In Supplementary Table 1, we list the parameters used for each experiment.

The optimized neural networks are ϕ* and ψ*, modeled as MLPs (ReLU activation, hidden dimension = 64, 3 layers, output size = 1), and Ω*, modeled as a coordinate-based MLP (Sitzmann et al., 2020; input size = 3, hidden dimension = 128, 5 layers, output size = 1, ω = 0.3). The other learnables are the two-dimensional latent vectors aᵢ associated with each neuron i.
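A minimal sketch of how these learnable components could be set up in PyTorch, following the stated sizes. The input dimensions of ϕ* and ψ* and the coordinates fed to Ω* are assumptions, since they are not fully specified above.

```python
import torch
import torch.nn as nn

def relu_mlp(in_dim, hidden=64, depth=3, out_dim=1):
    """ReLU MLP used for phi* and psi* (hidden dim 64, 3 layers, scalar output)."""
    layers, d = [], in_dim
    for _ in range(depth - 1):
        layers += [nn.Linear(d, hidden), nn.ReLU()]
        d = hidden
    layers.append(nn.Linear(d, out_dim))
    return nn.Sequential(*layers)

class Siren(nn.Module):
    """Coordinate-based MLP with sine activations for Omega*
    (input size 3, hidden dim 128, 5 layers, scalar output, omega_0 = 0.3)."""
    def __init__(self, in_dim=3, hidden=128, depth=5, out_dim=1, omega_0=0.3):
        super().__init__()
        dims = [in_dim] + [hidden] * (depth - 1) + [out_dim]
        self.linears = nn.ModuleList(nn.Linear(a, b) for a, b in zip(dims[:-1], dims[1:]))
        self.omega_0 = omega_0

    def forward(self, coords):
        h = coords
        for lin in self.linears[:-1]:
            h = torch.sin(self.omega_0 * lin(h))   # sine activation on hidden layers
        return self.linears[-1](h)                 # linear output layer

phi_star = relu_mlp(in_dim=4)      # assumed inputs: [x_i, a_i (2-d), aggregated message]
psi_star = relu_mlp(in_dim=3)      # assumed inputs: [x_j, a_i (2-d)]
omega_star = Siren()               # assumed 3-d coordinate input, e.g. (t, stimulus coordinates)
```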

Reference

This content is AI-processed based on open access ArXiv data.
