Sublinear but Never Superlinear Preferential Attachment by Local Network Growth
We investigate a class of network growth rules that are based on a redirection algorithm wherein new nodes are added to a network by linking to a randomly chosen target node with some probability 1-r or linking to the parent node of the target node with probability r. For fixed 0<r<1, the redirection algorithm is equivalent to linear preferential attachment. We show that when r is a decaying function of the degree of the parent of the initial target, the redirection algorithm produces sublinear preferential attachment network growth. We also argue that no local redirection algorithm can produce superlinear preferential attachment.
💡 Research Summary
The paper investigates a class of network growth mechanisms based on a simple redirection algorithm. In the basic version, a newly arriving node first selects a target node uniformly at random and then either attaches directly to that target with probability 1−r or is redirected to the target's parent (the node to which the target was originally attached) with probability r. When r is a fixed constant between 0 and 1, the attachment probability of a node of degree k is asymptotically linear in k, reproducing the linear preferential attachment of the Barabási-Albert model.
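The basic rule can be sketched in a few lines of Python (an illustrative reconstruction, not the authors' code; the growing tree is stored as a parent array, with node 0 as the root):

```python
import random

def grow_redirection(n_nodes, r, seed=None):
    """Grow a tree by redirection: each new node picks a uniformly random
    target and attaches to it with probability 1 - r, or to the target's
    parent with probability r."""
    rng = random.Random(seed)
    parent = [0]  # node 0 is the root; by convention its parent is itself
    for _ in range(1, n_nodes):
        target = rng.randrange(len(parent))
        if rng.random() < r:
            attach_to = parent[target]  # redirect to the target's parent
        else:
            attach_to = target          # attach directly to the target
        parent.append(attach_to)
    return parent
```

Note that the rule is purely local: the new node only ever inspects the random target and its immediate parent, yet for fixed 0 < r < 1 the growth is equivalent to linear preferential attachment.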
The authors extend this framework by allowing the redirection probability to depend on the degree of the parent of the initially chosen target. Specifically, they consider a decreasing function r(k_parent) = c·k_parent^−α with 0 < α < 1. Under this rule the chance that a new node's link is redirected to a high-degree parent is reduced, and a mean-field analysis shows that the effective attachment kernel becomes Π(k) ∝ k^{1−α}. This is a sublinear preferential attachment rule: the probability of receiving a new link grows more slowly than linearly with degree.
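The only change relative to the basic rule is that the redirection probability is evaluated at the parent's current degree. A minimal sketch, where the cap min(1, c·k^−α) and the degree bookkeeping conventions are assumptions added for illustration:

```python
import random

def grow_sublinear(n_nodes, c=1.0, alpha=0.5, seed=None):
    """Redirection where the probability of redirecting to the target's
    parent decays with that parent's degree: r(k) = min(1, c * k**(-alpha))."""
    rng = random.Random(seed)
    parent = [0]   # node 0 is the root
    degree = [1]   # seed the root with degree 1 so r(k) is always defined
    for _ in range(1, n_nodes):
        target = rng.randrange(len(parent))
        par = parent[target]
        r = min(1.0, c * degree[par] ** (-alpha))   # decaying redirection prob.
        attach_to = par if rng.random() < r else target
        parent.append(attach_to)
        degree[attach_to] += 1
        degree.append(1)   # the new node starts with one link (to attach_to)
    return parent, degree
```

The decision still uses only the target and its parent, so the rule remains fully local; the degree dependence enters solely through r.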
To validate the theory, extensive simulations were performed for network sizes ranging from 10⁴ to 10⁶ nodes and for several values of α (0.2, 0.5, 0.8). The resulting degree distributions follow a power law P(k)∝k^{-γ} with γ=1+1/(1‑α), confirming that the exponent can be tuned continuously by the decay exponent α. As α increases, γ becomes larger, the tail of the distribution steepens, and low‑degree nodes acquire a relatively larger share of links. The authors also measured clustering coefficients and average shortest‑path lengths, finding modest increases in clustering and slight reductions in path length as the attachment becomes more sub‑linear, reflecting a more homogeneous network topology.
A second, more conceptual contribution is a proof that no purely local redirection scheme can generate super‑linear preferential attachment (Π(k)∝k^{β} with β>1). The proof rests on the observation that a local redirection step can only involve the initially chosen target and its immediate parent; consequently the probability that a new node attaches to a node of degree k can never exceed a linear function of k. To achieve a super‑linear kernel one would need either a non‑local search that reaches nodes beyond the immediate neighbourhood or explicit global knowledge of node degrees, both of which lie outside the scope of the redirection paradigm.
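The locality argument can be made quantitative. In a growing tree of N nodes, a non-root node x of degree k has k−1 children plus one link to its own parent, so for a constant redirection probability r the new node attaches to x with probability (a back-of-the-envelope version of the argument, not the paper's exact derivation):

```latex
\Pi(x) \;=\; \underbrace{\frac{1-r}{N}}_{x \text{ chosen as target}}
\;+\; \underbrace{\frac{(k-1)\,r}{N}}_{\text{a child of } x \text{ chosen, then redirected}}
\;=\; \frac{r k + 1 - 2r}{N}.
```

This is linear in k, recovering the equivalence with linear preferential attachment. Moreover, even when r depends on degrees, each of the k targets that can lead to x (x itself or one of its children) contributes at most 1/N, so Π(x) ≤ k/N: a local redirection step can be at most linear in k, never superlinear.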
In summary, the paper demonstrates two key points: (1) by making the redirection probability a decreasing function of the parent’s degree, a simple, fully local growth rule can produce sub‑linear preferential attachment and thus generate a family of scale‑free networks with tunable degree exponents; (2) the same locality constraint fundamentally prevents the emergence of super‑linear attachment, implying that any model that exhibits a “rich‑gets‑richer” effect stronger than linear must incorporate non‑local information or additional mechanisms. These insights clarify the capabilities and limits of local growth algorithms and suggest directions for future work, such as designing non‑local extensions or identifying empirical systems where the observed attachment lies between the sub‑linear and linear regimes.