LUNES: Agent-based Simulation of P2P Systems (Extended Version)

Notice: This research summary and analysis were automatically generated using AI technology. For full accuracy, please refer to the original arXiv source.

We present LUNES, an agent-based Large Unstructured NEtwork Simulator that can simulate complex networks composed of a very large number of nodes. LUNES is modular: it separates the three phases of network topology creation, protocol simulation, and performance evaluation, which makes it easy to integrate external software tools into the main software architecture. The interaction protocols among network nodes are simulated via a middleware that supports both sequential and parallel/distributed simulation. In the latter case, a specific mechanism reduces communication overhead, guaranteeing high performance and scalability. To demonstrate the efficiency of LUNES, we test the simulator with gossip protocols executed on top of networks (representing peer-to-peer overlays) generated with different topologies. Results demonstrate the effectiveness of the proposed approach.


💡 Research Summary

The paper introduces LUNES (Large Unstructured NEtwork Simulator), an agent‑based discrete‑event simulator designed to model complex peer‑to‑peer (P2P) overlays and other large‑scale unstructured networks. LUNES adopts a modular architecture that separates three fundamental phases: (i) network topology creation, (ii) protocol simulation, and (iii) performance evaluation. This separation enables straightforward integration of external tools such as igraph for graph generation and GraphViz for visualisation, by means of simple template files, thus allowing users to plug in alternative topology generators or analysis utilities without modifying the core simulator.
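The decoupling of topology creation from protocol simulation can be illustrated with a short sketch: a stand-alone generator writes a plain edge list that a later simulation phase reads back. The edge-list format and file name below are hypothetical (the summary does not detail LUNES' actual template files), and a real setup would typically invoke an external library such as igraph rather than this hand-rolled generator.

```python
import random

def random_graph(n, p, seed=42):
    """Generate an Erdos-Renyi G(n, p) edge list; a stand-in for an
    external topology generator such as igraph."""
    rng = random.Random(seed)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if rng.random() < p]

def write_edge_list(edges, path):
    # Hypothetical plain edge-list format; LUNES' real template
    # files may differ.
    with open(path, "w") as f:
        for a, b in edges:
            f.write(f"{a} {b}\n")

edges = random_graph(100, 0.05)
write_edge_list(edges, "topology.txt")
print(f"generated {len(edges)} edges")
```

Because the generator only communicates through the file, it can be swapped for any other tool that emits the same format, which is the point of the modular pipeline.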

The simulation engine is built on top of the ARTÌS middleware, which supplies the basic services required for both sequential and parallel/distributed simulation (synchronisation, event scheduling, message passing, etc.). On top of ARTÌS, the GAIA framework provides higher‑level abstractions for agent‑based modelling. Each Simulated Model Entity (SME) is an autonomous agent that exchanges timestamped messages with other agents. GAIA’s distinctive feature is its dynamic clustering and load‑balancing mechanism. During execution, the system continuously monitors the communication pattern of each SME; when a heuristic detects that a set of SMEs frequently interact, it migrates them to the same execution unit (CPU core or node). By co‑locating highly interactive agents, the mechanism dramatically reduces inter‑process communication latency and bandwidth consumption, which are the dominant sources of overhead in Parallel and Distributed Simulation (PADS). The migration cost (state transfer, bookkeeping) is kept low, and the algorithm also respects load‑balancing constraints so that no single CPU becomes a bottleneck.
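GAIA's runtime behaviour can be approximated with a small heuristic sketch, under stated assumptions: each entity counts the messages it sends to each peer, and an entity is migrated to the execution unit (logical process, LP) that receives the bulk of its traffic, subject to a load cap. The function name, the 60% threshold, and the data structures are illustrative, not GAIA's actual API.

```python
from collections import Counter

def migration_decisions(traffic, placement, threshold=0.6, load_cap=None):
    """traffic: {entity: Counter({peer_entity: msg_count})}
    placement: {entity: lp_id}. Returns {entity: new_lp} for entities
    whose traffic is concentrated on a remote LP."""
    load = Counter(placement.values())
    if load_cap is None:
        load_cap = len(placement)  # no effective cap by default
    moves = {}
    for ent, peers in traffic.items():
        total = sum(peers.values())
        if total == 0:
            continue
        # Aggregate this entity's outgoing traffic per destination LP.
        per_lp = Counter()
        for peer, count in peers.items():
            per_lp[placement[peer]] += count
        best_lp, best = per_lp.most_common(1)[0]
        # Migrate only if most traffic leaves the current LP and the
        # destination is not already at its load cap.
        if (best_lp != placement[ent] and best / total >= threshold
                and load[best_lp] < load_cap):
            moves[ent] = best_lp
            load[best_lp] += 1
            load[placement[ent]] -= 1
    return moves

# Entity 0 sits on LP 0 but talks only to entities on LP 1.
placement = {0: 0, 1: 1, 2: 1}
traffic = {0: Counter({1: 8, 2: 5}), 1: Counter({2: 10})}
print(migration_decisions(traffic, placement))  # → {0: 1}
```

The load bookkeeping mirrors the constraint mentioned above: co-locating chatty entities must not pile everything onto one CPU.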

LUNES is a complete redesign of the authors’ earlier tool, PaScaS (Parallel and Distributed Scale‑free Network Simulator), which was limited to scale‑free topologies, used a static partitioning scheme, and offered modest extensibility. By leveraging the newer GAIA version, LUNES supports arbitrary topologies, dynamic re‑partitioning, and a more user‑friendly API based on object‑oriented concepts rather than low‑level C APIs. The simulator can run in pure sequential mode or exploit multi‑core and cluster environments; each module (topology generator, protocol engine, trace analyser) can operate concurrently, further improving resource utilisation.

The experimental evaluation focuses on gossip‑based data dissemination protocols executed over several network families: random graphs, Barabási‑Albert scale‑free graphs, and small‑world networks. Two gossip variants (a push‑based “flushed” protocol and a round‑robin scheme) are tested on networks ranging from 10⁴ to 10⁵ nodes. Results show that, when the clustering mechanism is enabled, parallel runs achieve speed‑ups of 4×–8× compared with the sequential baseline, while maintaining comparable or lower memory footprints. Even in scale‑free graphs, where a few high‑degree hubs could cause severe communication imbalance, the adaptive clustering successfully groups hub‑centric interactions, preventing the typical performance degradation observed in many PADS implementations. Metrics reported include Wall‑Clock Time (WCT), CPU utilisation, and memory consumption, all indicating that LUNES scales efficiently with both network size and the number of processing elements.
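The class of protocols evaluated can be sketched with a minimal probabilistic push-gossip, assuming a simple fixed-probability forwarding variant (not necessarily the exact protocols benchmarked in the paper): each newly informed node forwards the message to every neighbour independently with probability p.

```python
import random

def push_gossip(adj, source, p, rng=None):
    """adj: {node: [neighbours]}. Simulate probabilistic push gossip
    from `source`; return the set of nodes the message reaches."""
    rng = rng or random.Random(0)
    informed = {source}
    frontier = [source]
    while frontier:
        nxt = []
        for node in frontier:
            for nb in adj[node]:
                # Forward to each uninformed neighbour with probability p.
                if nb not in informed and rng.random() < p:
                    informed.add(nb)
                    nxt.append(nb)
        frontier = nxt
    return informed

# Ring of 10 nodes: with p = 1.0 the push reaches every node.
ring = {i: [(i - 1) % 10, (i + 1) % 10] for i in range(10)}
print(len(push_gossip(ring, 0, 1.0)))  # → 10
```

Sweeping p over a given topology yields the coverage-versus-overhead trade-off that dissemination studies of this kind typically report.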

In summary, LUNES contributes three major advances to the field of large‑scale network simulation: (1) a clean, modular pipeline that eases integration of external graph‑generation and analysis tools; (2) a middleware‑driven, agent‑based engine equipped with dynamic clustering and load‑balancing to minimise communication overhead in parallel/distributed settings; and (3) demonstrated applicability to a variety of topologies and protocols, achieving substantial speed‑ups on realistic P2P workloads. The authors plan future extensions such as support for dynamic topology changes, more sophisticated protocols (e.g., blockchain consensus), and richer user documentation, aiming to make LUNES a versatile platform for researchers studying complex networked systems.

