Signal Processing over Time-Varying Graphs: A Systematic Review

Notice: This research summary and analysis were automatically generated using AI. For accuracy, please refer to the original arXiv source.

As a representation for irregularly structured data, graphs have received considerable attention in recent years and have been widely applied to real-world settings such as social, traffic, and energy networks. Compared with non-graph algorithms, many graph-based methods benefit from the expressive power of graphs for representing high-dimensional, non-Euclidean data. The field of Graph Signal Processing (GSP) develops analogues of classical signal processing concepts such as shifting, convolution, filtering, and transforms. However, many GSP techniques assume the graph is static in both its signals and its topology. This assumption limits the effectiveness of GSP methods because it ignores the time-varying behavior of numerous real-world systems. In a traffic network, for example, the signal at each node varies over time and contains underlying temporal correlations and patterns worth analyzing. To address this challenge, a growing body of recent work investigates the processing of time-varying graph signals from three main directions: 1) graph time-spectral filtering, 2) multivariate time-series forecasting, and 3) spatiotemporal graph data mining with neural networks, where substantial progress has been achieved. Despite the success of signal processing and learning over time-varying graphs, no survey yet compares and summarizes the current methodology for GSP and graph learning. To fill this gap, this paper reviews the development and recent progress of signal processing and learning over time-varying graphs, compares the advantages and disadvantages of existing methods from both methodological and experimental perspectives, and outlines challenges and potential directions for future research.


💡 Research Summary

The paper presents a comprehensive survey of signal processing and learning methods for time‑varying graphs (TVGs), bridging the traditionally separate fields of Graph Signal Processing (GSP) and Graph Neural Networks (GNNs). It begins by highlighting the limitation of most existing GSP techniques, which assume a static graph structure, and motivates the need for frameworks that can handle both evolving node signals and changing topologies. TVGs are classified into three main categories: (1) static‑spatial‑temporal graphs (STGs) where only node signals vary over time, (2) discrete‑time dynamic graphs (DTDGs) represented as a sequence of graph snapshots, and (3) continuous‑time dynamic graphs (CTDGs) defined by event‑driven updates.
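The three TVG classes differ mainly in how topology and signals are stored over time. As a minimal illustration (the container names and fields below are hypothetical, not from the paper), the distinction can be sketched as:

```python
import numpy as np
from dataclasses import dataclass, field

# Hypothetical minimal containers illustrating the three TVG classes.

@dataclass
class STG:
    """Static topology, time-varying node signals."""
    adj: np.ndarray       # fixed N x N adjacency matrix
    signals: np.ndarray   # T x N matrix: one signal vector per time step

@dataclass
class DTDG:
    """Discrete-time dynamic graph: a sequence of snapshots."""
    snapshots: list       # list of (adjacency, signal) pairs, one per step

@dataclass
class CTDG:
    """Continuous-time dynamic graph: an event-driven update stream."""
    events: list = field(default_factory=list)  # (timestamp, kind, payload)

    def add_edge(self, t, u, v):
        # Topology changes arrive as timestamped events at irregular intervals.
        self.events.append((t, "edge_add", (u, v)))
```

Under this view, an STG fixes `adj` once, a DTDG pays the cost of one graph per snapshot, and a CTDG only materializes a graph on demand by replaying events up to a query time.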

The survey revisits core GSP concepts—graph Laplacian, Graph Fourier Transform (GFT), spectral filtering, and wavelet constructions—and explains how these operations extend to each TVG type. For STGs, a product‑graph approach enables joint spatial‑temporal filtering while keeping the Laplacian fixed. In DTDGs and CTDGs, the Laplacian must be recomputed or updated for every topology change, which leads to distinct algorithmic strategies.
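The core GSP pipeline the survey revisits is short enough to sketch directly: build the combinatorial Laplacian L = D - A, diagonalize it to obtain the GFT basis, and filter by reshaping the signal's spectrum. The toy graph and ideal low-pass response below are illustrative choices, not taken from the paper:

```python
import numpy as np

def graph_laplacian(adj):
    """Combinatorial graph Laplacian L = D - A."""
    return np.diag(adj.sum(axis=1)) - adj

def gft(L, x):
    """Graph Fourier Transform: project x onto the Laplacian eigenbasis.
    eigh applies because L is symmetric for undirected graphs."""
    eigvals, eigvecs = np.linalg.eigh(L)
    return eigvals, eigvecs, eigvecs.T @ x

def spectral_filter(L, x, h):
    """Filter x with spectral frequency response h(lambda)."""
    lam, U, x_hat = gft(L, x)
    return U @ (h(lam) * x_hat)   # scale each frequency, invert the GFT

# Toy undirected path graph on 4 nodes
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = graph_laplacian(A)
x = np.array([1.0, -1.0, 1.0, -1.0])   # rapidly oscillating (high-frequency) signal
# Ideal low-pass: keep only eigen-components with eigenvalue below 1.0
y = spectral_filter(L, x, lambda lam: (lam < 1.0).astype(float))
```

The smoothness quadratic form x^T L x (total variation over edges) shrinks after low-pass filtering, which is exactly the behavior the product-graph approach extends to the temporal dimension for STGs.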

A major contribution is the explicit mapping between spectral GCN layers and classical GSP filters. By interpreting learnable GCN weights as spectral filter coefficients (often approximated with Chebyshev polynomials), the authors show how spatial GNN convolutions are essentially adaptive, data‑driven versions of GSP filters. This connection clarifies why many GNN architectures inherit desirable properties such as locality and stability from GSP theory.
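This GCN-as-filter correspondence can be made concrete with the Chebyshev approximation: a K-th order filter computes sum_k theta_k T_k(L~) X, where L~ = 2L/lambda_max - I rescales the spectrum into [-1, 1] and T_k follows the Chebyshev recurrence. A minimal sketch (in a real spectral GCN layer the coefficients `theta` would be learned; here they are supplied):

```python
import numpy as np

def cheb_filter(L, X, theta):
    """Apply the order-(K-1) Chebyshev spectral filter sum_k theta_k T_k(L~) X.

    L~ = 2L/lambda_max - I maps Laplacian eigenvalues into [-1, 1], the
    domain where Chebyshev polynomials are well behaved. Each extra order
    widens the filter's spatial support by one hop, which is why such
    filters are K-localized.
    """
    n = L.shape[0]
    lam_max = np.linalg.eigvalsh(L).max()
    L_t = 2.0 * L / lam_max - np.eye(n)
    Tk_prev, Tk = X, L_t @ X                 # T_0(L~) X = X,  T_1(L~) X = L~ X
    out = theta[0] * Tk_prev
    if len(theta) > 1:
        out = out + theta[1] * Tk
    for k in range(2, len(theta)):
        # Chebyshev recurrence: T_k = 2 L~ T_{k-1} - T_{k-2}
        Tk_prev, Tk = Tk, 2.0 * L_t @ Tk - Tk_prev
        out = out + theta[k] * Tk
    return out
```

Only matrix-vector products with L~ are needed (no eigendecomposition of the signal path), which is the efficiency argument for using this parameterization in spatial GNN convolutions.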

The paper then surveys representative methods for each TVG class. STG‑focused models include Graph WaveNet, STGCN, and DCRNN, which combine temporal convolutions or recurrent units with graph‑based spectral filters. DTDG approaches such as Snapshot‑GNN, EvolveGCN, and DynGEM treat each snapshot as a separate graph and learn temporal dependencies across snapshots. CTDG techniques—Temporal Graph Networks (TGN), TGAT, and Continuous‑Time GCN—operate on event streams, using memory modules and time encodings to handle irregular intervals and node/edge insertions or deletions. The authors compare these methods in terms of prediction accuracy, computational complexity, real‑time capability, and scalability.

A thorough overview of benchmark datasets (e.g., traffic flow PEMS, power consumption, social media Reddit, brain connectivity ABIDE) and evaluation metrics (MAE, RMSE, MAPE, classification accuracy, graph‑specific measures) is provided, giving readers a practical guide for experimental comparison.
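The regression metrics named above have standard definitions, sketched below for reference (the small epsilon guard in MAPE is an implementation choice, not from the paper):

```python
import numpy as np

def mae(y, yhat):
    """Mean Absolute Error."""
    return np.mean(np.abs(y - yhat))

def rmse(y, yhat):
    """Root Mean Squared Error."""
    return np.sqrt(np.mean((y - yhat) ** 2))

def mape(y, yhat, eps=1e-8):
    """Mean Absolute Percentage Error, with eps guarding against zero targets."""
    return np.mean(np.abs((y - yhat) / (y + eps))) * 100.0

y = np.array([10.0, 20.0, 30.0])
yhat = np.array([12.0, 18.0, 33.0])
# mae = (2 + 2 + 3) / 3; rmse = sqrt((4 + 4 + 9) / 3); mape = 40/3 percent
```

RMSE penalizes large errors more heavily than MAE, while MAPE normalizes by the target magnitude, which matters when comparing traffic sensors with very different flow volumes.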

Finally, the survey identifies open challenges: lack of efficient spectral tools for large‑scale continuous‑time graphs, limited robustness to missing or noisy data, insufficient joint modeling of topology evolution and signal dynamics, and the scarcity of privacy‑preserving, federated learning frameworks for TVGs. Future research directions include multi‑scale spectral filtering, co‑learning of graph structure and node signals, integration of reinforcement learning for adaptive graph updates, and development of standardized, diverse benchmark suites.

Overall, the paper offers a unified perspective that connects GSP theory with modern GNN practice, clarifies methodological trade‑offs, and outlines a roadmap for advancing signal processing and machine learning on dynamic graph data.

