Impact of exponential long range and Gaussian short range lateral connectivity on the distributed simulation of neural networks including up to 30 billion synapses
Recent experimental neuroscience studies are highlighting the role of long-range intra-areal connectivity, which can be modeled by a distance-dependent exponential decay of the synaptic connection probability. This short report provides a preliminary measure of the impact of exponentially decaying lateral connectivity, compared to that of shorter-range Gaussian decays, on the scaling behaviour and memory occupation of a distributed spiking neural network simulator (DPSNN). Two-dimensional grids of cortical columns composed of point-like spiking neurons were connected by up to 30 billion synapses using exponential and Gaussian connectivity models. Up to 1024 hardware cores, hosted on a 64-node server platform, executed the MPI processes composing the distributed simulator. The hardware platform was a cluster of IBM NX360 M5 16-core compute nodes, each containing two Intel Xeon Haswell 8-core E5-2630 v3 processors clocked at 2.40 GHz, interconnected through an InfiniBand network. This study was conducted in the framework of the CORTICONIC FET project, also in view of the next-to-start activities foreseen as part of the Human Brain Project (HBP), SubProject 3 (Cognitive and Systems Neuroscience), WaveScalES work-package.
💡 Research Summary
This paper evaluates how two distance‑dependent connectivity schemes—an exponential decay representing long‑range intra‑areal connections and a Gaussian decay representing short‑range, locally dense connections—affect the performance, scalability, and memory footprint of a distributed spiking neural network simulator (DPSNN). The authors construct two‑dimensional grids of cortical columns, each column containing point‑like leaky‑integrate‑and‑fire neurons (80 % excitatory, 20 % inhibitory). The grids are scaled up to 64 × 64 columns, yielding up to 30 billion synapses. For the exponential model the connection probability follows P(d) ∝ exp(−d/λ) with λ≈200 µm, while the Gaussian model follows P(d) ∝ exp(−d²/2σ²) with σ≈50 µm. Each neuron receives on average 10 000 synapses; synaptic data (pre‑synaptic ID, post‑synaptic ID, weight) occupy 12 bytes per synapse.
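The two connection-probability kernels above can be sketched directly; the λ ≈ 200 µm and σ ≈ 50 µm values are taken from the summary, while the peak probability `p0` and the Bernoulli-sampling helper are illustrative assumptions, not the simulator's actual construction code:

```python
import math
import random

# Decay constants as reported in the summary.
LAMBDA_UM = 200.0   # exponential kernel, lambda ~ 200 um
SIGMA_UM = 50.0     # Gaussian kernel, sigma ~ 50 um

def p_exponential(d_um, p0=1.0):
    """Long-range kernel: P(d) = p0 * exp(-d / lambda)."""
    return p0 * math.exp(-d_um / LAMBDA_UM)

def p_gaussian(d_um, p0=1.0):
    """Short-range kernel: P(d) = p0 * exp(-d^2 / (2 sigma^2))."""
    return p0 * math.exp(-d_um ** 2 / (2.0 * SIGMA_UM ** 2))

def connect(d_um, kernel, rng=random.random):
    """Bernoulli draw: create a synapse at distance d with kernel probability."""
    return rng() < kernel(d_um)
```

At d = 300 µm the exponential kernel still retains exp(−1.5) ≈ 22 % of its peak, while the Gaussian kernel has fallen below 10⁻⁷, which is why the exponential model produces so many more node-crossing synapses.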
Simulations are executed on a 64‑node IBM NX360 M5 cluster. Each node hosts two Intel Xeon E5‑2630 v3 CPUs (8 cores each, 2.40 GHz) and 128 GB DDR4 memory, linked by Mellanox InfiniBand HDR (200 Gbps). The DPSNN engine uses MPI for inter‑process communication; each core manages a distinct subset of neurons and their associated synapses.
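A minimal sketch of how "each core manages a distinct subset of neurons and their associated synapses" might look. The network sizes are derived from the summary's figures (30 billion synapses, 10 000 synapses per neuron, a 64 × 64 column grid); the row-major chunked column-to-rank mapping is an illustrative assumption, not DPSNN's documented partitioning scheme:

```python
# Figures implied by the summary's numbers.
N_SYNAPSES = 30_000_000_000
SYN_PER_NEURON = 10_000
GRID = 64  # columns per side of the 2-D grid

N_NEURONS = N_SYNAPSES // SYN_PER_NEURON          # 3,000,000 neurons
NEURONS_PER_COLUMN = N_NEURONS // (GRID * GRID)   # ~732 neurons per column

def owning_rank(column_x, column_y, n_ranks):
    """Assign a column to an MPI rank: linearise the grid row-major and
    split it into contiguous chunks (one simple, illustrative scheme)."""
    col_id = column_y * GRID + column_x
    cols_per_rank = (GRID * GRID + n_ranks - 1) // n_ranks
    return col_id // cols_per_rank
```

With 1024 ranks this gives four columns (roughly 3 000 neurons) per core; every spike whose target synapse lives on another rank must travel as an MPI message.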
Performance is measured by strong‑scaling tests at 128, 256, 512, and 1024 cores. With Gaussian connectivity, runtime decreases almost linearly, achieving a 1‑second simulated interval in roughly 1.2 seconds on 1024 cores, and memory consumption stays around 1.2 GB per core. Exponential connectivity, however, introduces many long‑range synapses that span multiple nodes, increasing both the volume of MPI messages and the per‑core memory to about 1.8 GB. Consequently, scaling efficiency drops to ~70 % beyond 512 cores, and communication latency becomes the dominant bottleneck.
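The efficiency and memory figures above follow from simple arithmetic. The formula below is the standard strong-scaling definition; the specific runtimes plugged into it are assumptions chosen only to reproduce the ~70 % figure quoted in the summary, and the raw-synapse estimate uses the summary's 12 bytes/synapse (the measured 1.2–1.8 GB per core also includes neuron state, delivery queues, and MPI buffers):

```python
def strong_scaling_efficiency(t_base, cores_base, t, cores):
    """Fraction of the ideal speedup actually achieved when growing
    from cores_base to cores on a fixed-size problem."""
    speedup = t_base / t
    ideal = cores / cores_base
    return speedup / ideal

# Raw synaptic storage implied by the summary's figures.
BYTES_PER_SYNAPSE = 12
total_gb = 30e9 * BYTES_PER_SYNAPSE / 1e9   # 360 GB network-wide
per_core_gb = total_gb / 1024               # ~0.35 GB of bare synapse records
```

For example, if 512 cores needed 2.0 s of wall-clock per simulated second and 1024 cores needed 1.43 s (hypothetical values), the efficiency would be (2.0/1.43)/2 ≈ 0.70, matching the reported drop for the exponential model.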
From a neurobiological perspective, the exponential model reproduces experimentally observed long‑range coupling, leading to low‑frequency oscillations and widespread synchrony across columns. The Gaussian model yields fast, locally confined activity with higher spike rates, reflecting sensory‑driven processing but lacking the integrative dynamics of long‑range pathways.
The authors discuss mitigation strategies: (1) distance‑aware domain decomposition to keep long‑range synapses within a limited set of MPI ranks, (2) exploiting the high bandwidth of modern InfiniBand HDR to alleviate communication pressure, and (3) compressing synaptic metadata where possible. They also outline future work, including the incorporation of small‑world or scale‑free connectivity patterns, dynamic synaptic plasticity, and validation against large‑scale electrophysiological datasets.
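Strategy (1) can be illustrated with a square-block decomposition: tiling the cortical sheet into one block per rank keeps spatially close neurons co-located, so only synapses whose endpoints straddle a block border generate MPI traffic. This is a sketch of the idea, not the authors' implementation; the sheet size (6.4 mm) and block count are hypothetical:

```python
def block_rank(x_um, y_um, side_um, blocks_per_side):
    """Distance-aware decomposition: tile the square sheet into
    blocks_per_side^2 square blocks, one per MPI rank, so that
    spatially close neurons land on the same rank."""
    block = side_um / blocks_per_side
    bx = min(int(x_um // block), blocks_per_side - 1)
    by = min(int(y_um // block), blocks_per_side - 1)
    return by * blocks_per_side + bx

def crosses_rank(x1, y1, x2, y2, side_um, blocks_per_side):
    """A synapse needs inter-node communication only when its pre- and
    post-synaptic neurons fall in different blocks."""
    return (block_rank(x1, y1, side_um, blocks_per_side)
            != block_rank(x2, y2, side_um, blocks_per_side))
```

With λ ≈ 200 µm and blocks much wider than λ, most exponential-kernel synapses terminate inside their source block; only a border strip of width ~λ around each block generates remote traffic, which is the intuition behind keeping long-range synapses confined to a limited set of ranks.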
In conclusion, while short‑range Gaussian connectivity offers superior computational efficiency, it omits essential structural features of cortical networks. Long‑range exponential connectivity, though more demanding in memory and communication, is crucial for realistic brain modeling. The study demonstrates that with appropriate hardware (high‑speed interconnects) and algorithmic optimizations, distributed simulators can feasibly handle billions of synapses, supporting the ambitious goals of the Human Brain Project and related large‑scale neuroscience initiatives.