Vertex routing models
A class of models describing the flow of information within networks via routing processes is proposed and investigated, concentrating on the effects of memory traces on the global properties. The long-term flow of information is governed by cyclic attractors, allowing us to define a measure for the information centrality of a vertex, given by the number of attractors passing through that vertex. We find the number of vertices having a non-zero information centrality to be extensive/sub-extensive in the thermodynamic limit for models with/without a memory trace. We evaluate the distributions of the number of cycles, of the cycle length and of the maximal basins of attraction, finding a complete scaling collapse in the thermodynamic limit for the latter. Possible implications of our results for the information flow in social networks are discussed.
💡 Research Summary
The paper introduces a class of dynamical network models called “vertex routing models” that describe how information propagates through a graph by means of local routing decisions at each node. Each vertex i possesses a routing function f_i that maps an incoming signal to one of its neighboring vertices j∈N(i). Two variants of the routing rule are studied. In the “memory‑less” version the choice of j is drawn independently and uniformly at random at every time step, so the routing function changes arbitrarily from one step to the next. In the “memory” version the vertex remembers the neighbor it previously selected for a given input; whenever the same input recurs the vertex repeats the same choice. This memory trace creates a deterministic component in the otherwise stochastic dynamics and dramatically reduces the size of the state space.
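The two routing rules can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the `memory` dictionary keyed by (vertex, incoming-signal) pairs, and the use of Python's `random` module are all assumptions.

```python
import random

def route_memoryless(neighbors, rng):
    """Memory-less rule: draw a neighbor uniformly at random at every step,
    independently of all earlier choices."""
    return rng.choice(neighbors)

def route_with_memory(memory, vertex, incoming, neighbors, rng):
    """Memory rule: the first time vertex `vertex` sees input `incoming`,
    a neighbor is drawn at random; every recurrence of the same input
    repeats that stored choice (the memory trace)."""
    key = (vertex, incoming)  # hypothetical key: (vertex, incoming signal)
    if key not in memory:
        memory[key] = rng.choice(neighbors)
    return memory[key]
```

Once the memory dictionary is populated, the dynamics becomes fully deterministic, which is what confines the long-time behavior to cyclic attractors.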
The global dynamics of the system can be represented as a finite‑state Markov chain whose transition matrix is built from the collection of routing functions. In the long‑time limit the chain settles into a set of cyclic attractors (closed directed cycles) together with their basins of attraction. The authors define a new centrality measure, “information centrality” C(i), as the number of distinct attractors that pass through vertex i. Unlike traditional static centralities (degree, betweenness, eigenvector), C(i) directly reflects the dynamical pathways that information actually follows.
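A minimal sketch of how the cyclic attractors and the centrality C(i) could be extracted in the memory variant. It assumes the dynamical state is the ordered pair (previous vertex, current vertex), so that each incoming signal is identified with the edge it arrived on; the trajectory then becomes periodic once all routing decisions along it have been fixed. All names are hypothetical.

```python
import random
from collections import defaultdict

def find_attractor(prev, cur, memory, adjacency, rng):
    """Iterate the state (previous vertex, current vertex) until a state
    repeats; return the set of vertices on the resulting cyclic attractor."""
    seen, path = {}, []
    state = (prev, cur)
    while state not in seen:
        seen[state] = len(path)
        path.append(state)
        p, c = state
        key = (c, p)  # routing decision at c for a signal arriving from p
        if key not in memory:
            memory[key] = rng.choice(adjacency[c])  # first choice is random...
        state = (c, memory[key])  # ...and repeated on every recurrence
    return frozenset(c for (_, c) in path[seen[state]:])

def information_centrality(adjacency, rng):
    """C(i) = number of distinct attractors passing through vertex i,
    here counted by launching a trajectory from every directed edge."""
    memory, attractors = {}, set()
    for v in adjacency:
        for u in adjacency[v]:
            attractors.add(find_attractor(u, v, memory, adjacency, rng))
    centrality = defaultdict(int)
    for attractor in attractors:
        for v in attractor:
            centrality[v] += 1
    return dict(centrality)
```

Vertices absent from the returned dictionary have C(i) = 0; the fraction of vertices present is the quantity ρ(N) studied below.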
To explore the statistical properties of these models, the authors run extensive simulations on Erdős–Rényi random graphs with sizes N ranging from 10³ to 10⁶, averaging over 10⁴ independent realizations for each N. They record (i) the fraction ρ(N) of vertices with non‑zero information centrality, (ii) the total number M(N) of attractors, (iii) the distribution P(L) of attractor lengths L, and (iv) the distribution P(S_max) of the size of the largest basin of attraction.
Key findings are:
- Extensivity of information centrality. In the memory‑less model ρ(N) scales as ~N⁻¹, so the number of vertices with C(i)>0 grows sub‑linearly and their fraction vanishes in the thermodynamic limit. In contrast, the memory model yields a constant ρ≈0.35 for large N, indicating that a finite fraction of nodes remain dynamically relevant (extensive behavior).
- Number of attractors. M(N) grows only logarithmically with N in the memory‑less case, whereas in the memory case it scales roughly linearly, reflecting the proliferation of distinct cycles when past choices are retained.
- Attractor length distribution. For memory‑less routing, P(L) decays exponentially and the mean length grows as O(log N). With memory, P(L) develops a power‑law tail (exponent ≈2.1) and the mean length scales as O(N), so long cycles become typical.
- Largest basin scaling. The probability density of the maximal basin size S_max obeys the finite‑size scaling form P(S_max)≈N⁻ᵝ f(S_max/N) with β≈1.0. After rescaling S_max by N, the data for all system sizes collapse onto a universal curve f(x), demonstrating a robust scaling law for the size of the dominant basin of attraction.
The authors relate the memory‑less case to the classic random mapping problem, for which analytical results on cycle statistics are known. The memory model, however, introduces self‑similar constraints that prevent a direct analytic treatment, making the observed universal scaling especially noteworthy.
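The random mapping problem referenced here is easy to probe numerically: a random map f: V → V sends each vertex to one uniformly chosen vertex, a vertex is cyclic iff iterating f from it eventually returns to it, and the expected number of cyclic vertices is known to grow as √(πN/2). A self-contained sketch (function name and the three-color traversal are illustrative choices, not taken from the paper):

```python
import random

def cyclic_vertices(mapping):
    """Return the set of vertices lying on cycles of the functional
    graph of f, given as a list with mapping[v] = f(v)."""
    n = len(mapping)
    on_cycle = set()
    color = [0] * n  # 0 = unvisited, 1 = on current path, 2 = finished
    for start in range(n):
        path, v = [], start
        while color[v] == 0:
            color[v] = 1
            path.append(v)
            v = mapping[v]
        if color[v] == 1:  # closed a new cycle: the tail of `path` from v
            on_cycle.update(path[path.index(v):])
        for u in path:
            color[u] = 2
    return on_cycle

# Sample one uniform random map on N elements and count its cyclic vertices;
# averaged over many samples this approaches sqrt(pi * N / 2).
rng = random.Random(0)
N = 1000
f = [rng.randrange(N) for _ in range(N)]
n_cyclic = len(cyclic_vertices(f))
```

The √N growth of the cyclic core in random mappings is the analytic baseline against which the extensive behavior of the memory model stands out.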
In the discussion, the authors argue that the memory‑based routing captures essential features of real social communication: people tend to repeat interactions with familiar contacts, leading to persistent pathways of information flow. Vertices with high information centrality correspond to individuals who repeatedly appear in many communication cycles, i.e., “influencers” or “opinion leaders.” Conversely, in environments where interactions are essentially random and memoryless (e.g., one‑off broadcasts or transient advertising), such influential nodes are scarce and information quickly dissipates.
The paper concludes that incorporating a simple memory trace into routing dynamics fundamentally alters the macroscopic behavior of information flow on networks. The introduced notion of information centrality provides a dynamic, path‑dependent measure of node importance, and its statistical properties—extensivity, power‑law cycle lengths, and universal basin‑size scaling—offer new quantitative tools for the study of complex systems ranging from communication networks to epidemiology and marketing.