Percolation in self-similar networks
We provide a simple proof that graphs in a general class of self-similar networks have zero percolation threshold. The considered self-similar networks include random scale-free graphs with given expected node degrees and zero clustering, scale-free graphs with finite clustering and metric structure, growing scale-free networks, and many real networks. The proof and the derivation of the giant component size do not require the assumption that networks are treelike. Our results rely only on the observation that self-similar networks possess a hierarchy of nested subgraphs whose average degree grows with their depth in the hierarchy. We conjecture that this property is pivotal for percolation in networks.
💡 Research Summary
The paper tackles a fundamental question in network science: under what conditions does a giant connected component (GCC) emerge when edges are randomly retained with probability p (bond percolation)? Classical results, rooted in the configuration model and its tree‑like approximation, assert that a non‑zero percolation threshold pc exists whenever the second moment of the degree distribution is finite. However, many real‑world systems—social media, the Internet, biological interaction maps—exhibit high clustering, metric constraints, or growth mechanisms that violate the treelike assumption. The authors therefore propose a more universal approach based on self‑similarity, a structural property that many complex networks share.
1. Definition of self‑similarity
A graph G is called self‑similar with respect to a transformation T if repeatedly applying T yields a nested sequence of subgraphs
G = G0 ⊃ G1 = T(G0) ⊃ G2 = T(G1) ⊃ … .
Crucially, the average degree ⟨k⟩i of Gi must increase monotonically with i. In practice T can be a degree‑threshold filter, a spatial‑distance filter, or a growth rule that removes the newest nodes. The authors argue that this monotonic growth captures the “core‑periphery” organization observed in scale‑free networks: deeper subgraphs are richer in high‑degree hubs.
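The nested-subgraph construction is easy to see on a toy graph. The sketch below (stdlib Python; the graph and threshold are illustrative, not from the paper) implements T as a degree-threshold filter and checks that the average degree rises from G0 to G1 = T(G0):

```python
from collections import defaultdict

def avg_degree(edges):
    """Mean degree 2E/N of the graph defined by an edge list."""
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return 2 * len(edges) / len(deg) if deg else 0.0

def threshold_subgraph(edges, kmin):
    """T: keep only nodes of degree >= kmin, with the edges between them."""
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    keep = {v for v, d in deg.items() if d >= kmin}
    return [(u, v) for u, v in edges if u in keep and v in keep]

# Toy core-periphery graph: a clique of four hubs, each with three leaves.
hubs = ["a", "b", "c", "d"]
edges = [(x, y) for i, x in enumerate(hubs) for y in hubs[i + 1:]]
edges += [(h, f"{h}{k}") for h in hubs for k in range(3)]

g1 = threshold_subgraph(edges, 2)          # G1 = T(G0): the hub clique
print(avg_degree(edges), avg_degree(g1))   # 2.25 -> 3.0: deeper is denser
```

Here G0 has average degree 36/16 = 2.25, while the hub core G1 has average degree 3, the monotonic-growth condition in miniature.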
2. Classes of networks covered
The framework embraces four broad families:
- Random scale‑free graphs with prescribed expected degrees and zero clustering (the Chung‑Lu model). Here T selects nodes whose expected degree exceeds a cutoff.
- Scale‑free graphs with finite clustering and an underlying metric space. Nodes are embedded in Euclidean space; connection probability decays with distance but is amplified by node fitness (degree).
- Growing scale‑free networks (e.g., duplication‑divergence or preferential‑attachment with node replication). The growth rule itself generates a hierarchy of self‑similar subgraphs.
- Empirical networks (Internet autonomous systems, Twitter follower graphs, protein‑protein interaction maps). Empirical analysis shows that after appropriate filtering, the average degree of the remaining subgraph rises with filter stringency, confirming self‑similarity.
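For the first family, a minimal Chung-Lu sampler makes the filter T concrete: nodes carry expected degrees (weights), and T keeps only nodes whose weight exceeds a cutoff. The exponent γ = 2.5 and all sizes below are illustrative assumptions, not values from the paper:

```python
import random

def chung_lu(weights, seed=42):
    """Chung-Lu graph: link i, j independently with prob min(1, w_i*w_j/W)."""
    rng = random.Random(seed)
    W = sum(weights)
    edges = []
    for i in range(len(weights)):
        for j in range(i + 1, len(weights)):
            if W > 0 and rng.random() < min(1.0, weights[i] * weights[j] / W):
                edges.append((i, j))
    return edges

# Power-law expected degrees, w_i ~ (n/i)^{1/(gamma-1)}, smallest weight ~2.
n, gamma = 2000, 2.5
weights = [2.0 * (n / (i + 1)) ** (1.0 / (gamma - 1)) for i in range(n)]
edges = chung_lu(weights)

# T with increasing cutoffs: the surviving core typically gets denser.
for cutoff in (0, 8, 32):
    keep = {i for i, w in enumerate(weights) if w >= cutoff}
    sub = [(i, j) for i, j in edges if i in keep and j in keep]
    nodes = {v for e in sub for v in e}
    kbar = 2 * len(sub) / len(nodes) if nodes else 0.0
    print(cutoff, round(kbar, 2))
```

For γ between 2 and 3 the average internal degree of the filtered subgraph grows as the cutoff rises, which is exactly the monotonic-degree hierarchy the proof needs.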
3. Proof that the percolation threshold is zero
For any fixed p > 0, consider bond percolation on Gi. Each edge of Gi survives with probability p, so the effective average degree of the percolated subgraph is p · ⟨k⟩i. Since ⟨k⟩i grows without bound as i → ∞, there always exists a depth i* such that p · ⟨k⟩i* > 1, and by the classical Erdős–Rényi criterion an average degree above one produces a giant component in an infinite random graph. Any giant component of Gi* is, in turn, a subgraph of G, so the full network percolates as well. Hence, no matter how small p is, a sufficiently deep subgraph percolates, and consequently the percolation threshold is pc = 0 for any self‑similar network satisfying the monotonic‑degree condition.
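The depth argument can be checked numerically: take a dense "deep" subgraph, percolate with a small p chosen so that p · ⟨k⟩ > 1, and measure the largest surviving component. A minimal stdlib sketch (the complete graph stands in for a deep, dense Gi*; all parameters are illustrative):

```python
import random
from collections import deque, defaultdict

def percolate(edges, p, rng):
    """Bond percolation: keep each edge independently with probability p."""
    return [e for e in edges if rng.random() < p]

def largest_component(edges):
    """Size of the largest connected component of an edge list (BFS)."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, best = set(), 0
    for start in adj:
        if start in seen:
            continue
        queue, size = deque([start]), 0
        seen.add(start)
        while queue:
            u = queue.popleft()
            size += 1
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        best = max(best, size)
    return best

rng = random.Random(1)
# Dense "deep" subgraph: 60 nodes, all pairs linked, so <k> = 59.
core = [(i, j) for i in range(60) for j in range(i + 1, 60)]
p = 0.05                         # small p, yet p * <k> = 2.95 > 1
print(largest_component(percolate(core, p, rng)))   # typically most of the 60 nodes
```

Even at p = 0.05 the percolated core typically retains a component spanning most nodes, because the supercriticality condition p · ⟨k⟩ > 1 is met at this depth.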
4. Size of the giant component
The authors derive an explicit expression for the relative size S(p) of the GCC without invoking a tree approximation. Treating percolation on each Gi as an independent problem and applying the inclusion‑exclusion principle across the hierarchy, a node is excluded from the GCC only if it fails to join it at every depth, which yields

S(p) = 1 − ∏_{i=0}^{∞} [1 − S_i(p)],

where S_i(p) denotes the probability that a random node attaches to the giant component through the percolated subgraph Gi.
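Independently of the closed-form product, S(p) can be estimated directly by simulation as the mean fraction of nodes in the largest percolated component. A minimal stdlib sketch (the ring-plus-chords test graph and all parameters are illustrative):

```python
import random
from collections import deque, defaultdict

def gcc_fraction(n_nodes, edges, p, trials=100, seed=0):
    """Monte Carlo estimate of S(p): mean fraction of the n_nodes nodes
    in the largest component after keeping each edge with probability p."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        adj = defaultdict(list)
        for u, v in edges:                 # bond percolation
            if rng.random() < p:
                adj[u].append(v)
                adj[v].append(u)
        seen, best = set(), 0
        for start in adj:                  # BFS over components
            if start in seen:
                continue
            queue, size = deque([start]), 0
            seen.add(start)
            while queue:
                u = queue.popleft()
                size += 1
                for w in adj[u]:
                    if w not in seen:
                        seen.add(w)
                        queue.append(w)
            best = max(best, size)
        total += best / n_nodes
    return total / trials

# Sweep p on a small test graph: a 100-node ring plus long-range chords.
ring = [(i, (i + 1) % 100) for i in range(100)]
chords = [(i, (i + 7) % 100) for i in range(100)]
for p in (0.1, 0.3, 0.6, 0.9):
    print(p, round(gcc_fraction(100, ring + chords, p), 3))
```

Comparing such Monte Carlo curves against the analytic product is the natural sanity check on the hierarchy-based derivation.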