📝 Original Info
- Title: Rethinking Multi-Agent Intelligence Through the Lens of Small-World Networks
- ArXiv ID: 2512.18094
- Date: 2025-12-19
- Authors: Boxuan Wang, Zhuoyun Li, Xiaowei Huang, Yi Dong (University of Liverpool)
📝 Abstract
Large language models (LLMs) have enabled multi-agent systems (MAS) in which multiple agents argue, critique, and coordinate to solve complex tasks, making communication topology a first-class design choice. Yet most existing LLM-based MAS either adopt fully connected graphs, simple sparse rings, or ad-hoc dynamic selection, with little structural guidance. In this work, we revisit classic theory on small-world (SW) networks and ask: what changes if we treat SW connectivity as a design prior for MAS? We first bridge insights from neuroscience and complex networks to MAS, highlighting how SW structures balance local clustering and long-range integration. Using multi-agent debate (MAD) as a controlled testbed, experimental results show that SW connectivity yields nearly the same accuracy and token cost, while substantially stabilizing consensus trajectories. Building on this, we introduce an uncertainty-guided rewiring scheme for scaling MAS, where long-range shortcuts are added between epistemically divergent agents using LLM-oriented uncertainty signals (e.g., semantic entropy). This yields controllable SW structures that adapt to task difficulty and agent heterogeneity. Finally, we discuss broader implications of SW priors for MAS design, framing them as stabilizers of reasoning, enhancers of robustness, scalable coordinators, and inductive biases for emergent cognitive roles.
📄 Full Content
Rethinking Multi-Agent Intelligence Through the Lens of
Small-World Networks
Boxuan Wang, Zhuoyun Li, Xiaowei Huang, Yi Dong∗
School of Computer Science and Informatics, University of Liverpool
Liverpool, United Kingdom
Abstract
Large language models (LLMs) have enabled multi-agent systems (MAS) in which multiple agents argue, critique, and coordinate to solve complex tasks, making communication topology a first-class design choice. Yet most existing LLM-based MAS either adopt fully connected graphs, simple sparse rings, or ad-hoc dynamic selection, with little structural guidance. In this work, we revisit classic theory on small-world (SW) networks and ask: what changes if we treat SW connectivity as a design prior for MAS? We first bridge insights from neuroscience and complex networks to MAS, highlighting how SW structures balance local clustering and long-range integration. Using multi-agent debate (MAD) as a controlled testbed, experimental results show that SW connectivity yields nearly the same accuracy and token cost, while substantially stabilizing consensus trajectories. Building on this, we introduce an uncertainty-guided rewiring scheme for scaling MAS, where long-range shortcuts are added between epistemically divergent agents using LLM-oriented uncertainty signals (e.g., semantic entropy). This yields controllable SW structures that adapt to task difficulty and agent heterogeneity. Finally, we discuss broader implications of SW priors for MAS design, framing them as stabilizers of reasoning, enhancers of robustness, scalable coordinators, and inductive biases for emergent cognitive roles.
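The uncertainty-guided rewiring scheme is described here only at a high level. As a rough illustration of the idea (not the paper's actual algorithm), the sketch below adds long-range shortcuts to a ring of agents between the pairs whose answer distributions disagree most; Jensen–Shannon divergence stands in for the semantic-entropy-style uncertainty signal, and all function names and the divergence choice are assumptions.

```python
import math
from itertools import combinations

def entropy(probs):
    """Shannon entropy (natural log) of a discrete distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def js_divergence(p, q):
    """Jensen-Shannon divergence: a symmetric measure of disagreement
    between two agents' answer distributions."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return entropy(m) - (entropy(p) + entropy(q)) / 2

def uncertainty_guided_rewire(ring_edges, answer_dists, num_shortcuts=1):
    """Add shortcut edges between the most epistemically divergent
    agent pairs that are not already neighbours on the ring."""
    existing = {frozenset(e) for e in ring_edges}
    candidates = [
        (js_divergence(answer_dists[i], answer_dists[j]), i, j)
        for i, j in combinations(range(len(answer_dists)), 2)
        if frozenset((i, j)) not in existing
    ]
    candidates.sort(reverse=True)  # most divergent pairs first
    shortcuts = [(i, j) for _, i, j in candidates[:num_shortcuts]]
    return ring_edges + shortcuts

# Demo: 6 agents on a ring; agents 0 and 3 hold opposed, confident answers,
# the rest are near-uniform. The shortcut lands between the divergent pair.
ring = [(i, (i + 1) % 6) for i in range(6)]
dists = [
    [1.0, 0.0, 0.0],
    [0.34, 0.33, 0.33],
    [0.34, 0.33, 0.33],
    [0.0, 1.0, 0.0],
    [0.34, 0.33, 0.33],
    [0.34, 0.33, 0.33],
]
new_edges = uncertainty_guided_rewire(ring, dists, num_shortcuts=1)
print(new_edges[-1])  # (0, 3)
```

In a real MAS the answer distributions would come from sampling each agent's responses and clustering them semantically; here they are fixed by hand to keep the sketch self-contained.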
Keywords
Multi-Agent Systems, Small-World Network, Multi-Agent Debate, Large Language Models
1 Introduction
Large language models (LLMs) have enabled a new generation of multi-agent systems (MAS) in which multiple LLM agents interact, critique, and collaborate to solve complex tasks. Frameworks such as AutoGen [33] and CAMEL [14] demonstrate that organizing agents into conversational roles can enhance reasoning, while benchmarks like AgentBench [17] reveal persistent limitations in long-horizon planning, coordination, and self-correction. These observations underscore that the structure of inter-agent communication is becoming a central design axis for multi-agent intelligence.
A growing line of work investigates how communication topology affects collective reasoning. In multi-agent debate (MAD), a widely studied testbed for collaborative reasoning, agents iteratively exchange arguments and refine their answers [6, 13, 16, 18]. Although early MAD implementations commonly adopted a fully connected topology, recent work challenges this assumption. Li et al. [15] show that even a simple ring topology, where each agent
∗Corresponding Author: Yi.Dong@liverpool.ac.uk
Preprint, Under Review
[Figure: small-world topologies at rewiring probability P; shown panel: P = 0 (regular lattice); remaining panels truncated]
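The "P = 0 (regular lattice)" panel referenced above comes from the standard Watts–Strogatz construction: at rewiring probability P = 0 the topology is a regular ring lattice with high clustering but long paths, while small P introduces random shortcuts that shrink path length with little loss of clustering. A minimal check with networkx (an assumed dependency for illustration; the paper's own setup is not shown here):

```python
import networkx as nx

N, K = 30, 4  # 30 agents, each wired to its 4 nearest ring neighbours

# P = 0: regular ring lattice -- high clustering, long average paths
lattice = nx.watts_strogatz_graph(N, K, p=0.0)

# Small P: a few random shortcuts -- clustering largely kept, paths shortened
small_world = nx.connected_watts_strogatz_graph(N, K, p=0.2, seed=7)

for name, g in [("lattice (P=0)", lattice), ("small-world (P=0.2)", small_world)]:
    print(f"{name}: clustering={nx.average_clustering(g):.3f}, "
          f"avg path={nx.average_shortest_path_length(g):.2f}")
```

For a ring lattice with K = 4 each node's neighbourhood contains 3 of the 6 possible edges, so its clustering coefficient is exactly 0.5; the small-world variant keeps clustering comparable while its average shortest path drops, which is the balance of local clustering and long-range integration the paper invokes.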
…(Full text truncated)…
This content is AI-processed based on ArXiv data.