Effect of Memory on the Dynamics of Random Walks on Networks
Pathways of diffusion observed in real-world systems often require stochastic processes going beyond first-order Markov models, as implicitly assumed in network theory. In this work, we focus on second-order Markov models, and derive an analytical expression for the effect of memory on the spectral gap and thus, equivalently, on the characteristic time needed for the stochastic process to asymptotically reach equilibrium. Perturbation analysis shows that standard first-order Markov models can either overestimate or underestimate the diffusion rate of flows across the modular structure of a system captured by a second-order Markov network. We test the theoretical predictions on a toy example and on numerical data, and discuss their implications for network theory, in particular in the case of temporal or multiplex networks.
💡 Research Summary
The paper addresses a fundamental limitation of traditional network diffusion models, which typically assume first‑order (memoryless) Markov dynamics. Empirical observations in domains such as human mobility, web traffic, and livestock movement reveal that the next step of a flow often depends on the previous location, i.e., the process exhibits memory. To capture this effect analytically, the authors focus on second‑order Markov processes by constructing a “memory network” in which each undirected edge of the original physical network is replaced by two directed memory nodes representing ordered pairs of consecutive visits (i→j and j→i). Random walks on the physical nodes become ordinary Markov walks on the memory nodes.
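The memory-network construction just described can be sketched in a few lines. The toy edge list below is hypothetical (not the paper's example); each undirected edge {i, j} yields the two directed memory nodes (i→j) and (j→i), and a step of the second-order walk moves from memory node (i, j) to some (j, k):

```python
# Sketch of the memory-network construction: each undirected edge {i, j}
# of the physical network becomes two directed memory nodes, (i -> j)
# ("previously at i, now at j") and (j -> i). The edge list is illustrative.
edges = [(0, 1), (1, 2), (2, 0), (2, 3)]

memory_nodes = [(i, j) for i, j in edges] + [(j, i) for i, j in edges]

# A step of the second-order walk goes from memory node (i, j) to some
# (j, k): the walker sits on physical node j, remembering it came from i.
successors = {m: [(jj, k) for (jj, k) in memory_nodes if jj == m[1]]
              for m in memory_nodes}
```

Note that the successor list of (i, j) includes the backtracking node (j, i); a second-order model can assign it its own (promoted or suppressed) probability.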
The dynamics are encoded in a transition matrix (T_{\alpha\beta}) that gives the probability of moving from memory node (\alpha) (which records the previous step) to memory node (\beta) (which records the current step). For a continuous‑time Poisson walk, the probability vector obeys (\dot{P}= -L P) with the normalized Laplacian (L = I - T). In the baseline first‑order case, the transition probabilities are uniform over all out‑neighbors, yielding a simple Laplacian whose second-smallest eigenvalue (the spectral gap) is (\lambda_2 = 1/2). This eigenvalue governs the mixing time (\tau_{\text{mix}} = 1/\text{Re}(\lambda_2)) and is tightly linked to the modular (bi‑community) structure of the network.
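The baseline construction in this paragraph can be checked numerically. The sketch below builds the uniform first-order transition matrix on the memory nodes of a small hypothetical graph (a 4-cycle with a chord, not the paper's example), forms (L = I - T), and reads off the spectral gap and mixing time; the particular value of (\lambda_2) depends on the graph chosen.

```python
import numpy as np

# Hypothetical physical network: a 4-cycle with one chord.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
memory_nodes = [(i, j) for i, j in edges] + [(j, i) for i, j in edges]
idx = {m: a for a, m in enumerate(memory_nodes)}
n = len(memory_nodes)

deg = {}
for i, j in edges:
    deg[i] = deg.get(i, 0) + 1
    deg[j] = deg.get(j, 0) + 1

# First-order (memoryless) baseline: from memory node (i, j) the walker
# sits on j and hops uniformly to any neighbor k, i.e. to node (j, k).
T = np.zeros((n, n))
for (i, j), a in idx.items():
    for (j2, k), b in idx.items():
        if j2 == j:
            T[a, b] = 1.0 / deg[j]

L = np.eye(n) - T                            # P' = -L P
lam = sorted(np.linalg.eigvals(L), key=lambda z: z.real)
lam2 = lam[1]                                # spectral gap
tau_mix = 1.0 / lam2.real                    # characteristic mixing time
```

Because the transition out of (i, j) depends only on j, this baseline T inherits the spectrum of the physical-node walk (padded with zeros), which is why the memoryless case reduces to ordinary network diffusion.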
Memory is introduced as a small perturbation (\Delta T) to the baseline matrix: (T = T^{M} + \Delta T). Conservation of probability forces each row sum of (\Delta T) to zero, while individual entries can be positive (promoted transitions) or negative (suppressed transitions). Applying first‑order perturbation theory to the eigenvalue problem yields a compact expression for the change in the spectral gap:
To first order in the perturbation,

(\Delta\lambda_2 = -\dfrac{\langle u_2 \,|\, \Delta T \,|\, v_2 \rangle}{\langle u_2 \,|\, v_2 \rangle},)

where (u_2) and (v_2) are the left and right eigenvectors of the unperturbed Laplacian associated with (\lambda_2). Depending on its sign, this correction shows whether memory speeds up or slows down relaxation relative to the first-order model.
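As a numerical sanity check of this first-order expression, the sketch below compares the prediction (-\langle u_2|\Delta T|v_2\rangle / \langle u_2|v_2\rangle), built from the left and right eigenvectors of the unperturbed Laplacian, against the exact shift of the spectral gap. The 4-state stochastic matrix and the perturbation are hypothetical stand-ins, not taken from the paper.

```python
import numpy as np

# Hypothetical 4-state stochastic matrix standing in for the baseline T^M;
# the perturbation formula itself is general.
T0 = np.array([[0.0, 0.5,  0.5,  0.0 ],
               [0.5, 0.0,  0.25, 0.25],
               [0.5, 0.25, 0.0,  0.25],
               [0.0, 0.5,  0.5,  0.0 ]])
L0 = np.eye(4) - T0                      # normalized Laplacian, L = I - T

# Memory perturbation Delta T: each row sums to zero (probability is
# conserved); positive entries promote a transition, negative suppress one.
eps = 1e-4
DT = np.zeros((4, 4))
DT[0, 0], DT[0, 1] = +eps, -eps

def gap_eig(L):
    """Eigenvalue of L with the second-smallest real part (spectral gap)."""
    return sorted(np.linalg.eigvals(L), key=lambda z: z.real)[1]

# Right and left eigenvectors of the unperturbed L0 at lambda_2.
vals, V = np.linalg.eig(L0)
k = sorted(range(4), key=lambda i: vals[i].real)[1]
v2 = V[:, k]
valsT, U = np.linalg.eig(L0.T)
u2 = U[:, min(range(4), key=lambda i: abs(valsT[i] - vals[k]))]

# First-order prediction vs. exact shift of the spectral gap.
pred = -(u2 @ DT @ v2) / (u2 @ v2)       # Delta lambda_2, first order
exact = gap_eig(L0 - DT) - vals[k]       # perturbed L = I - (T0 + DT)
```

The two numbers agree up to corrections of order (\epsilon^2), which is what first-order perturbation theory guarantees; the arbitrary normalization and sign of the numerical eigenvectors cancel in the ratio.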