Computing Networks: A General Framework to Contrast Neural and Swarm Cognitions

This paper presents the Computing Networks (CNs) framework. CNs are used to generalize neural and swarm architectures. Artificial neural networks, ant colony optimization, particle swarm optimization, and realistic biological models are used as examples of instantiations of CNs. The description of these architectures as CNs allows their comparison. Their differences and similarities allow the identification of properties that enable neural and swarm architectures to perform complex computations and exhibit complex cognitive abilities. In this context, the most relevant characteristic of CNs is the existence of multiple dynamical and functional scales. The relationship of multiple dynamical and functional scales to adaptation, cognition (of brains and swarms), and computation is discussed.


💡 Research Summary

The paper introduces the Computing Networks (CN) framework as a unifying formalism for a broad class of distributed computational systems, specifically targeting artificial neural networks (ANNs) and swarm‑based optimization algorithms such as ant colony optimization (ACO) and particle swarm optimization (PSO). A CN is defined by four elementary components: (1) nodes, which represent processing units (neurons, ants, particles, etc.); (2) edges or connections, which encode interaction pathways (synaptic weights, pheromone trails, velocity‑position coupling); (3) state variables, which capture the instantaneous internal condition of each node (membrane potential, location, velocity); and (4) transition rules, which dictate how states evolve over time (activation functions, probabilistic movement rules, update equations). By mapping each of the selected architectures onto this schema, the authors demonstrate that seemingly disparate systems share a common structural backbone.
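The four components can be made concrete with a minimal sketch. The class and field names below are illustrative assumptions, not notation from the paper; the toy instantiation is ANN-like, with each node summing its weighted inputs.

```python
from dataclasses import dataclass
from typing import Callable, Dict, Tuple

@dataclass
class ComputingNetwork:
    nodes: list                                   # (1) processing units (neurons, ants, particles)
    edges: Dict[Tuple[int, int], float]           # (2) interaction pathways with strengths
    states: Dict[int, float]                      # (3) instantaneous state of each node
    transition: Callable[["ComputingNetwork", int], float]  # (4) how a node's state evolves

    def step(self):
        """Apply the transition rule synchronously to every node."""
        self.states = {n: self.transition(self, n) for n in self.nodes}

# ANN-like transition rule: each node's next state is the weighted sum of its inputs.
def weighted_sum(cn, n):
    return sum(w * cn.states[src] for (src, dst), w in cn.edges.items() if dst == n)

cn = ComputingNetwork(
    nodes=[0, 1],
    edges={(0, 1): 0.5, (1, 0): 0.25},
    states={0: 1.0, 1: 0.0},
    transition=weighted_sum,
)
cn.step()
print(cn.states)  # {0: 0.0, 1: 0.5}
```

Swapping the transition rule (probabilistic movement for ACO, velocity updates for PSO) changes the instantiation without changing the schema, which is the point of the mapping.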

A central contribution of the work is the explicit identification of three hierarchical dynamical scales. The microscopic scale concerns the local, per‑node updates driven by immediate neighbors; the mesoscopic scale aggregates groups of nodes into functional modules (hidden layers, sub‑colonies) and governs information exchange among them; the macroscopic scale captures the emergent global behavior, such as convergence to a solution or the formation of a stable attractor. These temporal scales are tightly coupled with functional scales, which correspond to the specific computational goals (classification, regression, path finding, resource allocation). The paper argues that the interplay between multiple dynamical and functional scales is the key to the rich computational and cognitive capabilities observed in both brains and swarms.
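The three scales can be seen in a toy consensus dynamic (an illustration of the idea, not a model from the paper): nodes update locally, modules exchange aggregates, and the whole network settles into a global attractor.

```python
import random

random.seed(0)
# Two modules of four nodes each, with random initial states in [0, 1].
modules = [[random.random() for _ in range(4)] for _ in range(2)]

for t in range(50):
    # Microscopic scale: each node moves toward the mean of its module neighbors.
    for m in modules:
        mean = sum(m) / len(m)
        for i in range(len(m)):
            m[i] += 0.5 * (mean - m[i])
    # Mesoscopic scale: modules exchange aggregate information (their means).
    means = [sum(m) / len(m) for m in modules]
    global_mean = sum(means) / len(means)
    for m in modules:
        for i in range(len(m)):
            m[i] += 0.1 * (global_mean - m[i])

# Macroscopic scale: the emergent global behavior is convergence to one attractor.
spread = max(max(m) for m in modules) - min(min(m) for m in modules)
print(f"spread after convergence: {spread:.6f}")
```

The per-node rule never references the global state, yet the macroscopic convergence emerges from the coupling across scales.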

Adaptation is dissected into two complementary mechanisms. Structural adaptation modifies the parameters of the network itself—synaptic weights in ANNs, pheromone intensities in ACO, inertia or acceleration coefficients in PSO—typically through learning rules such as back‑propagation or pheromone update equations. Dynamic adaptation refers to transient state changes that do not alter the underlying parameters, such as neuronal spiking or the instantaneous movement of an ant. The authors show that the co‑existence of these mechanisms across scales enables systems to both explore new solutions and exploit accumulated knowledge, a hallmark of intelligent behavior.
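The two mechanisms can be contrasted in a PSO-style loop. This is a minimal sketch with illustrative coefficients and decay schedule, not the paper's formulation: velocities and positions change every step (dynamic adaptation), while the inertia parameter itself is modified over time (structural adaptation), shifting the swarm from exploration toward exploitation.

```python
import random

random.seed(1)
pos = [random.uniform(-5, 5) for _ in range(10)]  # particle positions
vel = [0.0] * 10                                  # particle velocities
best = list(pos)                                  # per-particle best positions
inertia = 0.9                                     # a structural parameter of the network

def f(x):        # toy objective: minimize x^2
    return x * x

init_best = min(f(p) for p in pos)

for t in range(100):
    gbest = min(best, key=f)
    for i in range(10):
        # Dynamic adaptation: transient state changes that leave parameters intact.
        vel[i] = (inertia * vel[i]
                  + 1.5 * random.random() * (best[i] - pos[i])
                  + 1.5 * random.random() * (gbest - pos[i]))
        pos[i] += vel[i]
        if f(pos[i]) < f(best[i]):
            best[i] = pos[i]
    # Structural adaptation: the parameter itself is updated over time.
    inertia *= 0.99

final_best = f(min(best, key=f))
print(f"objective: {init_best:.4f} -> {final_best:.6f}")
```

Early on, high inertia keeps the swarm exploring; as it decays, accumulated knowledge in the personal and global bests dominates, illustrating the explore/exploit balance the authors attribute to multi-scale adaptation.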

From a computational‑complexity perspective, the authors note that while node‑level operations are linear in the number of nodes (O(N)), the cost of mesoscopic and macroscopic interactions can grow polynomially or even exponentially depending on network topology, feedback strength, and the richness of the functional landscape. Consequently, efficient CN design requires careful balancing of information flow, feedback loops, and modularity to avoid combinatorial explosion while preserving the capacity for emergent cognition.
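A back-of-the-envelope count makes the scaling argument concrete (the numbers are illustrative assumptions, not figures from the paper): per-node updates grow linearly in N, pairwise interactions in a fully connected topology grow quadratically, and grouping nodes into modules keeps most interactions local.

```python
N, num_modules = 1000, 10

node_updates = N                              # O(N): one update per node per step
full_interactions = N * (N - 1) // 2          # O(N^2): every pair interacts

# Modular topology: dense within modules, sparse between module representatives.
per_module = N // num_modules
modular_interactions = (num_modules * per_module * (per_module - 1) // 2
                        + num_modules * (num_modules - 1) // 2)

print(node_updates, full_interactions, modular_interactions)
# 1000 499500 49545 -- modularity cuts interaction cost by roughly a factor of 10
```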

The paper concludes by highlighting the practical implications of the CN framework. It provides a principled blueprint for constructing hybrid AI systems that combine deep learning’s hierarchical feature extraction with swarm intelligence’s global search capabilities. Moreover, because CNs can model realistic biological neural circuits (e.g., spiking neuron networks) and robotic swarms in the same language, the framework fosters cross‑disciplinary dialogue among AI researchers, neuroscientists, and roboticists. In essence, Computing Networks reveal that the essential ingredients for complex computation and cognition are not tied to a specific substrate but to the organization of multiple interacting scales and adaptive mechanisms, an insight that may guide the next generation of intelligent, adaptable systems.

