Network geometry of the Drosophila brain

Reading time: 5 minutes
...

📝 Original Info

  • Title: Network geometry of the Drosophila brain
  • ArXiv ID: 2602.16417
  • Date: 2026-02-18
  • Authors: not specified in the source data (author names and affiliations should be verified in the original paper)

📝 Abstract

The recent reconstruction of the Drosophila brain provides a neural network of unprecedented size and level of detail. In this work, we study the geometrical properties of this system by applying network embedding techniques to the graph of synaptic connections. Since previous analyses have revealed an inhomogeneous degree distribution, we first employ a hyperbolic embedding approach that maps the neural network onto a point cloud in the two-dimensional hyperbolic space. In general, hyperbolic embedding methods exploit the exponentially growing volume of hyperbolic space with increasing distance from the origin, allowing for an approximately uniform spatial distribution of nodes even in scale-free, small-world networks. By evaluating multiple embedding quality metrics, we find that the network structure is well captured by the resulting two-dimensional hyperbolic embedding, and in fact is more congruent with this representation than with the original neuron coordinates in three-dimensional Euclidean space. In order to examine the network geometry in a broader context, we also apply the well-known Euclidean network embedding approach Node2vec, where the dimension of the embedding space, $d$, can be set arbitrarily. In 3 dimensions, the Euclidean embedding of the network yields lower quality scores compared to the original neuron coordinates. However, as a function of the embedding dimension, the scores show an improving tendency, surpassing the level of the 2d hyperbolic embedding roughly at $d=16$ and reaching a maximum around $d=64$. Since network embeddings can serve as valuable inputs for a variety of downstream machine learning tasks, our results offer new perspectives on the structure and representation of this recently revealed and biologically significant neural network.
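
The abstract refers to "multiple embedding quality metrics" without naming them in this summary. As a hedged illustration of the kind of score involved, the Python sketch below rates a set of node coordinates by how well pairwise distances separate observed edges from random non-edges (a link-based ROC AUC); the metric, the sampling scheme, and the scikit-learn/NetworkX dependencies are assumptions for illustration, not the paper's actual evaluation.

```python
# Illustrative embedding-quality score (assumed metric, not from the paper):
# distances in the embedding should be smaller for connected node pairs.
import numpy as np
import networkx as nx
from sklearn.metrics import roc_auc_score

def embedding_auc(G: nx.Graph, coords: dict, n_pairs: int = 50_000, seed: int = 0) -> float:
    """coords maps each node to a NumPy coordinate vector; higher AUC = more faithful map."""
    rng = np.random.default_rng(seed)
    nodes = list(G.nodes())
    edges = list(G.edges())

    def dist(u, v):
        return np.linalg.norm(coords[u] - coords[v])

    # positive pairs: a random sample of observed connections
    pos_idx = rng.choice(len(edges), size=min(n_pairs, len(edges)), replace=False)
    pos = [dist(*edges[i]) for i in pos_idx]

    # negative pairs: random node pairs with no edge between them
    neg = []
    while len(neg) < len(pos):
        u, v = rng.choice(nodes, size=2, replace=False)
        if not G.has_edge(u, v):
            neg.append(dist(u, v))

    # smaller distance should indicate a higher chance of being connected
    scores = [-d for d in pos + neg]
    labels = [1] * len(pos) + [0] * len(neg)
    return roc_auc_score(labels, scores)
```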

💡 Deep Analysis

📄 Full Content

Over the past two decades, network-based approaches have emerged as a powerful framework for describing and analysing complex systems. By representing interactions among system components as graphs, this perspective has revealed universal organizing principles across domains ranging from technology and society to biology [1,2,3,4,5]. Among these systems, neural networks, encoding the connections between neurons of living organisms, have always been of central interest, providing information on the structural organization and functional dynamics of the nervous system. A widely known example is the neural network of the Caenorhabditis elegans worm [6,7], consisting of roughly 300-400 neurons (depending on the sex of the animal) with about 5,000-7,000 connections. A considerably larger network was reconstructed for the larva of the Drosophila [8], comprising 3,016 neurons connected via roughly $5 \cdot 10^5$ synapses. However, both of these networks are dwarfed by the recent reconstruction of the brain of the adult female Drosophila melanogaster [9,10], which contains 139,255 neurons linked by $5 \cdot 10^7$ chemical synapses. Given that flies are capable of navigating over distances [11], show signs of long-term memories [12], engage in social interactions [13], and exhibit a wiring diagram between brain regions similar to that of mammals [14,15], research on the fly brain offers insights that extend beyond a mere increase in neural network scale.

The fundamental network characteristics of this fascinating system have already been examined [10], uncovering a scale-free degree distribution and a distinct rich-club organisation, in which highly central neurons (hubs) are densely interconnected. In addition, specific neuronal subsets were identified that may act as signal integrators or broadcasters. In the present study, we augment these results through the use of network embeddings, aimed at arranging the neurons in metric spaces based solely on the structure of the connections. In general, network embedding techniques provide an important alternative to traditional network measures for gaining information on various properties of the analysed network [16,17,18,19]. When transforming a network into a point cloud in a metric space, the original graph structure becomes encoded in the relative coordinates of the nodes; for example, tightly knit communities in the network are usually mapped onto compact, dense point clusters. The node coordinates also offer utility in several areas, including predicting missing links, assisting navigation over the network, and serving as input for further machine learning tasks such as node classification and community detection. Moreover, as demonstrated in Refs. [20,21], access to node coordinates can significantly aid in identifying nodes that contribute to shortest paths, especially in partially incomplete networks.
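
As a concrete illustration of the downstream use mentioned above, the minimal sketch below feeds embedding vectors into an off-the-shelf classifier. The neuron labels (e.g., hypothetical cell-type annotations) and the scikit-learn setup are assumptions for illustration; the paper does not specify a particular classification task.

```python
# Minimal sketch: classify neurons from their embedding vectors alone.
# `coords` and `labels` are hypothetical inputs (node -> vector, node -> class).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def classify_from_embedding(coords: dict, labels: dict, seed: int = 0) -> float:
    nodes = sorted(labels)                       # nodes with a known label
    X = np.array([coords[u] for u in nodes])     # embedding vectors as features
    y = np.array([labels[u] for u in nodes])

    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, random_state=seed, stratify=y
    )
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return clf.score(X_te, y_te)                 # held-out accuracy
```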

Although embedding nodes into the Euclidean space might seem like an intuitive choice, the hyperbolic approach provides a compelling alternative with distinct advantages [22]. Crucially, while Euclidean algorithms often require high-dimensional embeddings, hyperbolic approaches can achieve good-quality embeddings in just two dimensions. This is because the exponential volume growth of hyperbolic spheres provides greater flexibility in node placement compared to the power-law growth of Euclidean spheres [23]. The literature offers several different hyperbolic embedding algorithms, including likelihood optimization with respect to hyperbolic network models [24,25], dimension reduction of non-linear Laplacian matrices [26,27] and Lorentz matrices using the hyperboloid model [28,29], coalescent embeddings [30] (which apply dimension reduction to pre-weighted matrices capturing network structure), and mixed approaches combining dimension reduction and local optimization [31,32,33], as well as neural network-based embeddings and approaches taking advantage of spanning trees [34] or hierarchically nested communities in the network structure [35,36]. Most of these methods operate within the native representation of hyperbolic space, which, in 2 dimensions, is often referred to as the native disk.
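
Since the native-disk representation recurs below, here is a short sketch of the standard distance formula used in that representation: each node has a radial coordinate $r$ and an angle $\theta$, and distances follow the hyperbolic law of cosines at curvature $K=-1$. This is textbook hyperbolic geometry, not code taken from any of the cited embedding methods.

```python
# Hyperbolic distance between two points given in native-disk polar coordinates
# (standard law of cosines at curvature K = -1).
import numpy as np

def hyperbolic_distance(r1: float, theta1: float, r2: float, theta2: float) -> float:
    dtheta = np.pi - abs(np.pi - abs(theta1 - theta2))   # angular difference in [0, pi]
    if dtheta == 0.0:
        return abs(r1 - r2)                              # points along the same radius
    x = np.cosh(r1) * np.cosh(r2) - np.sinh(r1) * np.sinh(r2) * np.cos(dtheta)
    return float(np.arccosh(max(x, 1.0)))                # guard against rounding below 1

# Example: a small angular gap keeps peripheral nodes relatively close,
# while an antipodal pair is roughly r1 + r2 apart.
print(hyperbolic_distance(0.5, 0.0, 8.0, 0.1))
print(hyperbolic_distance(8.0, 0.0, 8.0, np.pi))
```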

Nevertheless, most of the hyperbolic embedding methods above cannot be scaled up to networks as large as the wiring diagram of the Drosophila melanogaster brain [9,10] because of their substantial computational demands. As a result, a hyperbolic map of this system has remained unavailable so far. To overcome this limitation, we adopt the recently introduced Cluster-Level Optimised Vertex Embedding (CLOVE) method [36], which delivers high embedding quality while maintaining exceptional computational efficiency. By avoiding the computational bottlenecks of alternative approaches, CLOVE makes it possible to construct a faithful hyperbolic embedding of the Drosophila melanogaster brain connectome. In parallel, we examine Euclidean embeddings produced by the similarly efficient Node2vec approach.
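
For the Euclidean side, the abstract names Node2vec with a tunable embedding dimension $d$. A minimal sketch of producing such coordinates is given below, using the community `node2vec` Python package; the package choice and the walk parameters are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch: d-dimensional Euclidean node coordinates via Node2vec
# (community `node2vec` package; parameters are illustrative assumptions).
import networkx as nx
from node2vec import Node2Vec

def embed_connectome(G: nx.Graph, d: int = 64) -> dict:
    """Return a dict mapping each node to its d-dimensional embedding vector."""
    n2v = Node2Vec(G, dimensions=d, walk_length=40, num_walks=10, workers=4)
    model = n2v.fit(window=10, min_count=1)          # trains word2vec on the random walks
    return {node: model.wv[str(node)] for node in G.nodes()}

# Sweeping d (e.g., 3, 16, 64) and re-scoring each embedding is one way to
# probe the dimension dependence described in the abstract.
```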

Reference

This content is AI-processed based on open access ArXiv data.
