Learning the Influence Graph of a Markov Process that Randomly Resets to the Past
Learning the influence graph $G$ of a high-dimensional Markov process is central to many application domains, including social networks, neuroscience, and financial risk analysis. However, in many of these applications, future states of the process are occasionally and unpredictably influenced by a distant past state, thus destroying Markovianity. To study this practical issue, we propose the past influence model (PIM), which captures the occasional "random resets to the past" by modifying the Markovian dynamics in [1], which, in turn, is a non-linear generalization of the dynamics studied in [2], [3]. The recursive greedy algorithm proposed in this paper recovers any bounded-degree $G$ when the number of "jumps back in time" is order-wise smaller than the total number of samples, and the algorithm requires no memory.
💡 Research Summary
The paper addresses a fundamental limitation of existing influence-graph learning methods for high-dimensional stochastic processes: they assume strict Markovian dynamics, i.e., the next state depends only on the current state. In many real-world systems—social media, opinion dynamics, financial markets—a distant past state can occasionally and abruptly re-appear and drive the future evolution. To capture this phenomenon, the authors introduce the Past Influence Model (PIM). In PIM, at each discrete time step a Bernoulli trial decides whether the process follows the usual Markov transition (with probability p) or "teleports" to the state d steps in the past (with probability 1 − p). The state update for node v is a convex combination of an intrinsic term, a weighted sum of its neighbors' current observations, and a weighted sum of the same neighbors' observations d steps ago when a reset occurs. Formally,
$$
X_{t+1}(v) \;=\; \mu_v \;+\; B_t \sum_{u \in \mathcal{N}(v)} w_{uv}\, X_t(u) \;+\; (1 - B_t) \sum_{u \in \mathcal{N}(v)} w_{uv}\, X_{t-d}(u), \qquad B_t \sim \mathrm{Bernoulli}(p),
$$

so that with probability $p$ the update reads the neighbors' current observations, and with probability $1 - p$ it reads their observations $d$ steps in the past.
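The reset mechanism described above can be illustrated with a short simulation sketch. This is not the authors' implementation: the PIM dynamics in the paper are non-linear, and the function below assumes a simplified linear update with a hypothetical weight matrix `W`, intrinsic term `mu`, and Gaussian noise, purely to show how the Bernoulli trial switches between the current state and the state `d` steps in the past.

```python
import numpy as np

def simulate_pim(W, mu, p, d, T, rng=None):
    """Simulate a simplified linear instance of the Past Influence Model.

    W  : (n, n) weight matrix; W[v, u] is the influence of node u on node v
         (nonzero entries encode the edges of the influence graph G).
    mu : (n,) intrinsic term for each node.
    p  : probability of a normal Markov step; with probability 1 - p the
         update reads the state from d steps in the past instead.
    Returns the (T + 1, n) trajectory and the list of reset times.
    """
    rng = np.random.default_rng(rng)
    n = len(mu)
    X = np.zeros((T + 1, n))
    X[0] = rng.standard_normal(n)
    resets = []
    for t in range(T):
        # Bernoulli trial: Markov step (prob. p) or reset to the past.
        # Before time d there is no past state to jump back to.
        if t < d or rng.random() < p:
            past = X[t]          # usual Markov transition
        else:
            past = X[t - d]      # "jump back in time" by d steps
            resets.append(t)
        # Hypothetical linear update: intrinsic term + neighbor sum + noise.
        X[t + 1] = mu + W @ past + 0.1 * rng.standard_normal(n)
    return X, resets
```

In a trajectory generated this way, the fraction of reset steps concentrates around 1 − p, matching the abstract's regime where the number of jumps back in time is order-wise smaller than the number of samples whenever p is close to 1.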