Interpretable Dynamic Network Modeling of Tensor Time Series via Kronecker Time-Varying Graphical Lasso
With the rapid development of web services, large amounts of time series data are generated and accumulated across various domains such as finance, healthcare, and online platforms. Because such data often consists of multiple co-evolving, interacting variables, estimating the time-varying dependencies between variables (i.e., the dynamic network structure) has become crucial for accurate modeling. However, real-world data is often represented as tensor time series with multiple modes, resulting in large, entangled networks that are hard to interpret and computationally intensive to estimate. In this paper, we propose Kronecker Time-Varying Graphical Lasso (KTVGL), a method designed for modeling tensor time series. Our approach estimates mode-specific dynamic networks in a Kronecker product form, thereby avoiding overly complex entangled structures and producing interpretable modeling results. Moreover, the partitioned network structure prevents the exponential growth of computational time with data dimension. In addition, our method can be extended to a streaming algorithm, making the computational time independent of the sequence length. Experiments on synthetic data show that the proposed method achieves higher edge estimation accuracy than existing methods while requiring less computation time. To further demonstrate its practical value, we also present a case study using real-world data. Our source code and datasets are available at https://github.com/Higashiguchi-Shingo/KTVGL.
💡 Research Summary
The paper introduces Kronecker Time‑Varying Graphical Lasso (KTVGL), a novel framework for dynamic network inference on tensor‑valued time series. Traditional dynamic graphical models such as Time‑Varying Graphical Lasso (TVGL) assume multivariate data; when applied to tensors they require flattening, which destroys multi‑modal structure, inflates dimensionality, and yields a single massive, entangled network that is hard to interpret and computationally expensive. KTVGL addresses these issues by modeling each non‑temporal mode of the tensor (e.g., keywords, countries) with its own sparse precision matrix Θ_t^{(m)} and representing the overall precision matrix as the Kronecker product K_t = ⊗_{m=1}^{M} Θ_t^{(m)}. This Kronecker structure reduces the number of free parameters from O(∏_m d_m²) to O(∑_m d_m²), dramatically lowering memory and computational demands while preserving mode‑specific relationships.
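The parameter savings of the Kronecker form can be illustrated with a small sketch (dimensions are made up for illustration; the precision matrices are identity placeholders, not estimates):

```python
import numpy as np

# Hypothetical example with M = 2 non-temporal modes,
# e.g., d1 keywords x d2 countries at one time point.
d1, d2 = 20, 30

# Mode-specific precision matrices (identity placeholders here;
# KTVGL would estimate sparse versions of these at each time t).
theta1 = np.eye(d1)
theta2 = np.eye(d2)

# Joint precision over all d1*d2 variables as a Kronecker product.
K = np.kron(theta1, theta2)          # shape (d1*d2, d1*d2)

# Free parameters: sum of d_m^2 (KTVGL) vs. product of d_m^2 (flattened).
kron_params = d1**2 + d2**2          # O(sum d_m^2)  -> 1300
flat_params = (d1 * d2)**2           # O(prod d_m^2) -> 360000
print(kron_params, flat_params)      # 1300 360000
```

The flattened representation needs roughly 277× more parameters even at this modest size, which is the entanglement KTVGL avoids.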
Mathematically, the tensor time series X ∈ ℝ^{T×d_1×…×d_M} is viewed as a sequence of order‑M tensors X_t. By unfolding X_t along mode m, the authors define an empirical covariance Ŝ_t^{(m)} that aggregates variability across all other modes after whitening them with their current precision estimates. Lemma 1 shows that the trace term tr(Ŝ_t K_t) in the log‑likelihood decomposes into D_{\m} · tr(Ŝ_t^{(m)} Θ_t^{(m)}), where D_{\m} denotes the product of the dimensions of the modes other than m. This enables the original non‑convex objective to be rewritten as a sum of M independent TVGL‑like sub‑problems, each involving only Θ_t^{(m)}. For each mode, the full optimization problem combines (i) a Gaussian log‑likelihood term, (ii) an ℓ₁ penalty on off‑diagonal entries for sparsity, and (iii) a temporal consistency penalty ψ (e.g., a Laplacian penalty for smooth evolution or an ℓ₁ penalty for sparse edge changes).
Because each sub‑problem is convex, the authors employ an alternating optimization scheme: fix all but one mode’s precision matrices and solve the resulting TVGL problem using the well‑established ADMM algorithm. This cycle repeats until convergence, guaranteeing the same convergence properties as standard TVGL. The approach naturally supports parallelization across modes.
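The alternating cycle can be sketched as a small skeleton. Everything here is illustrative, not the authors' API: `solve_tvgl_mode` is a stand-in for the ADMM-based TVGL solver, and the covariance callback abstracts the whitening step that couples the modes:

```python
import numpy as np

def solve_tvgl_mode(S_seq):
    # Stand-in for the ADMM-based TVGL solve over one mode's precision
    # sequence; a real solver would enforce the l1 sparsity and temporal
    # consistency penalties. Here we return regularized inverses.
    return [np.linalg.inv(S + 0.1 * np.eye(S.shape[0])) for S in S_seq]

def ktvgl_alternating(make_mode_cov, dims, T, n_outer=3):
    """Alternating scheme (sketch): fix every mode but one, solve that
    mode's TVGL-like sub-problem, then cycle over modes until the outer
    loop ends. Names and signatures are hypothetical."""
    thetas = [[np.eye(d) for _ in range(T)] for d in dims]
    for _ in range(n_outer):
        for m in range(len(dims)):
            # S_t^{(m)} depends on the other modes' current estimates,
            # which is why the modes must be revisited in turn.
            S_seq = make_mode_cov(m, thetas)
            thetas[m] = solve_tvgl_mode(S_seq)
    return thetas
```

Because the inner solves for different modes within one sweep touch disjoint parameter blocks, the per-mode sub-problems are also natural units for parallel execution.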
A streaming extension, KTVGL‑Stream, incrementally updates the precision matrices as each new time point arrives, making the per‑update computational cost independent of the total sequence length.
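The key property of the streaming variant is that per-step state and work stay constant as the series grows. A minimal sketch of that idea, with a hypothetical class name and a bounded buffer standing in for the incremental update (the real method would warm-start ADMM from the previous estimates rather than keep placeholder solutions):

```python
import numpy as np

class KTVGLStreamSketch:
    """Illustrative streaming wrapper (name and interface hypothetical):
    on each new time point, only a bounded window of recent state is
    touched, so per-step cost does not grow with the sequence length."""

    def __init__(self, dims, window=10):
        self.window = window
        self.buffer = []                      # last `window` tensor slices
        self.thetas = [np.eye(d) for d in dims]

    def update(self, Xt):
        self.buffer.append(Xt)
        if len(self.buffer) > self.window:
            self.buffer.pop(0)                # constant-size state
        # A real implementation would refresh self.thetas here via a
        # warm-started ADMM pass over the window; placeholders are kept.
        return self.thetas
```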
Empirical evaluation includes (1) synthetic experiments where KTVGL achieves up to 73.5 % higher AUC‑ROC for edge recovery and up to 60.5 × faster runtime compared with state‑of‑the‑art dynamic network methods, and (2) a real‑world case study on Google Trends data (time × keyword × country). In the case study, mode‑specific networks reveal interpretable patterns: the country network captures regional co‑interest dynamics, while the keyword network uncovers topic‑level correlations that are obscured in a flattened representation.
In summary, KTVGL offers three decisive advantages: (a) it preserves tensor structure and reduces parameter explosion via a Kronecker product formulation, (b) it yields interpretable, mode‑wise dynamic networks, and (c) it scales to high‑dimensional data and streaming scenarios. These contributions advance the state of dynamic network modeling for multi‑modal time series and open avenues for applications such as anomaly detection, personalized recommendation, and multi‑domain forecasting.