Return probability and k-step measures
The notion of return probability – explored most famously by George Pólya on d-dimensional lattices – has potential as a measure for the analysis of networks. We present an efficient method for finding return probability distributions for connected undirected graphs. We argue that return probability has the same discriminatory power as existing k-step measures – in particular, beta centrality (with negative beta), the graph-theoretical power index (GPI), and subgraph centrality. We compare the running time of our algorithm to beta centrality and subgraph centrality and find that it is significantly faster. When return probability is used to measure the same phenomena as beta centrality, it runs in linear time – O(n+m), where n and m are the number of nodes and edges, respectively – which takes much less time than either the matrix inversion or the sequence of matrix multiplications required for calculating the exact or approximate forms of beta centrality, respectively. We call this form of return probability the Pólya power index (PPI). Computing subgraph centrality requires an expensive eigendecomposition of the adjacency matrix; return probability runs in half the time of the eigendecomposition on a 2000-node network. These performance improvements are important because computationally efficient measures are necessary in order to analyze large networks.
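As an illustrative sketch only – not the linear-time algorithm described above – the k-step return probability of a node i under a simple random walk is the i-th diagonal entry of the k-th power of the transition matrix D⁻¹A, where A is the adjacency matrix and D the diagonal degree matrix. The naive computation below makes that definition concrete; the function name and interface are hypothetical.

```python
import numpy as np

def return_probabilities(adj, k_max):
    """Naive k-step return probabilities for a simple random walk.

    adj: symmetric 0/1 adjacency matrix of a connected undirected graph.
    Returns R of shape (k_max+1, n), where R[k, i] is the probability
    that a walk started at node i is back at node i after k steps.
    """
    adj = np.asarray(adj, dtype=float)
    deg = adj.sum(axis=1)
    P = adj / deg[:, None]          # row-stochastic transition matrix D^{-1} A
    n = adj.shape[0]
    R = np.zeros((k_max + 1, n))
    R[0] = 1.0                      # every walk is "back" at step 0
    Pk = np.eye(n)
    for k in range(1, k_max + 1):
        Pk = Pk @ P                 # Pk = P^k
        R[k] = np.diag(Pk)
    return R

# Triangle graph: a walk cannot return after 1 step, and returns
# with probability 1/2 after 2 steps.
adj = [[0, 1, 1],
       [1, 0, 1],
       [1, 1, 0]]
R = return_probabilities(adj, 2)
```

This direct matrix-power approach costs O(k·n³) and is shown only to pin down the quantity being measured; the point of the paper is that the distribution can be obtained far more cheaply.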