Estimating time-varying networks

Notice: This research summary and analysis were automatically generated using AI technology. For absolute accuracy, please refer to the original arXiv source.

Stochastic networks are a plausible representation of the relational information among entities in dynamic systems such as living cells or social communities. While there is a rich literature on estimating a static or temporally invariant network from observation data, little has been done toward estimating time-varying networks from time series of entity attributes. In this paper we present two new machine learning methods for estimating time-varying networks, both of which build on a temporally smoothed $l_1$-regularized logistic regression formalism that can be cast as a standard convex optimization problem and solved efficiently using generic solvers scalable to large networks. We report promising results on recovering simulated time-varying networks. For real data sets, we reverse-engineer the latent sequence of temporally rewiring political networks among Senators from US Senate voting records, and the latent evolving regulatory networks underlying 588 genes across the life cycle of Drosophila melanogaster from a microarray time course.


💡 Research Summary

The paper tackles the problem of inferring networks whose topology changes over time from longitudinal observations of node attributes. While a substantial body of work exists for estimating static, time‑invariant graphs (e.g., Graphical Lasso, static $l_1$‑regularized logistic regression), relatively little has been done to capture the continuous evolution of edges in real‑world dynamic systems such as cellular regulatory circuits or political alliances.

To fill this gap, the authors propose two closely related machine-learning frameworks that both rely on a temporally smoothed $l_1$-regularized logistic regression formulation. For each node $i$ and time point $t$, a logistic model regresses the observed attribute of node $i$ on the attributes of the remaining nodes; the nonzero regression coefficients identify the neighbors of $i$, and hence the edges $(i,j)$ present at time $t$. The key innovation is the addition of a temporal penalty that encourages the parameters $\Theta^{(t)}$ at time $t$ to stay close to their predecessors $\Theta^{(t-1)}$. The overall objective can be written as

$$\min_{\Theta^{(1)},\dots,\Theta^{(T)}} \;\sum_{t=1}^{T}\ell\big(\Theta^{(t)}\big) \;+\; \lambda_1 \sum_{t=1}^{T}\big\|\Theta^{(t)}\big\|_1 \;+\; \lambda_2 \sum_{t=2}^{T}\big\|\Theta^{(t)}-\Theta^{(t-1)}\big\|_1,$$

where $\ell(\Theta^{(t)})$ is the logistic log-loss on the observations at time $t$, $\lambda_1$ controls the sparsity of each network, and $\lambda_2$ controls how smoothly the network is allowed to rewire over time.
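To make the formulation concrete, here is a minimal sketch (not the authors' implementation) of estimating one node's time-varying neighborhood. For simplicity it swaps the paper's $l_1$ temporal penalty for a squared difference, which keeps the smooth part of the objective differentiable, and optimizes with proximal gradient descent (ISTA) instead of a generic convex solver. All function names and parameter values are illustrative.

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of the l1 penalty: shrink each coordinate toward zero.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def logistic_loss_grad(theta, X, y):
    # Gradient of sum_k log(1 + exp(-y_k * x_k . theta)), labels y in {-1, +1}.
    z = np.clip(y * (X @ theta), -30.0, 30.0)  # clip to avoid overflow in exp
    return X.T @ (-y / (1.0 + np.exp(z)))

def estimate_neighborhood(Xs, ys, lam1=0.5, lam2=1.0, step=0.005, iters=800):
    """ISTA on a temporally smoothed l1-regularized logistic objective
    for a single node. Xs, ys are per-epoch design matrices and +/-1 labels;
    the temporal penalty here is squared (a simplification of the paper's l1)."""
    T, p = len(Xs), Xs[0].shape[1]
    thetas = np.zeros((T, p))
    for _ in range(iters):
        for t in range(T):
            g = logistic_loss_grad(thetas[t], Xs[t], ys[t])
            # Gradient of lam2 * sum_t ||theta_t - theta_{t-1}||^2 w.r.t. theta_t.
            if t > 0:
                g += 2.0 * lam2 * (thetas[t] - thetas[t - 1])
            if t < T - 1:
                g += 2.0 * lam2 * (thetas[t] - thetas[t + 1])
            # Gradient step on the smooth part, then l1 proximal step.
            thetas[t] = soft_threshold(thetas[t] - step * g, step * lam1)
    return thetas
```

The nonzero entries of each row of the returned array are the estimated neighbors of the node at that epoch; because consecutive rows are penalized for differing, the recovered neighborhoods rewire gradually rather than independently at each time point.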

