Two Bayesian Approaches to Dynamic Gaussian Bayesian Networks with Intra- and Inter-Slice Edges


Gaussian Dynamic Bayesian Networks (GDBNs) are a widely used tool for learning network structures from continuous time-series data. To capture both time-lagged and contemporaneous dependencies, advanced GDBNs allow for dynamic inter-slice edges as well as static intra-slice edges. In the literature, two Bayesian modeling approaches have been developed for GDBNs. Both build on and extend the well-known Gaussian BGe score. We refer to them as the mean-adjusted BGe (mBGe) and the extended BGe (eBGe) models. In this paper, we contrast the two models and compare their performance empirically. The main finding of our study is that the two models induce different equivalence classes of network structures. In particular, the equivalence classes implied by the eBGe model are non-standard, and we propose a new variant of the DAG-to-CPDAG algorithm to identify them. To the best of our knowledge, these non-standard equivalence classes have not been previously reported.


💡 Research Summary

This paper provides a systematic comparison of two Bayesian scoring approaches for Gaussian Dynamic Bayesian Networks (GDBNs), namely the mean‑adjusted BGe (mBGe) and the extended BGe (eBGe). Both methods build on the classic BGe score for static Gaussian Bayesian networks, but they differ in how they incorporate intra‑slice (static) and inter‑slice (dynamic) dependencies.

The authors first review the traditional BGe score, which places a conjugate Normal‑Wishart prior on the mean vector and precision matrix, yielding a closed‑form marginal likelihood that factorizes over parent‑child families in a DAG. They then describe how GDBNs extend this framework with two graphs: a dynamic graph (GD), whose lag‑1 edges point from variables at time t−1 to variables at time t, and a static graph (GS) of contemporaneous edges, which must remain acyclic.
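To make the family factorization concrete, the following is a minimal sketch of the standard BGe local score, assuming the Geiger–Heckerman Normal‑Wishart parameterisation with a zero prior mean, a diagonal scale matrix T = tI, and the subset‑adjusted Wishart degrees of freedom; the function names and default hyperparameters are ours, not the paper's.

```python
import numpy as np
from scipy.special import multigammaln

def log_ml_subset(X, am=1.0, aw=None):
    """Log marginal likelihood of a Normal-Wishart model for the columns
    in X (N observations x l variables).  `aw` is the Wishart degrees of
    freedom, already adjusted for the subset size."""
    N, l = X.shape
    if aw is None:
        aw = l + 2.0
    # diagonal scale matrix chosen so t is constant across subsets
    t = am * (aw - l - 1) / (am + 1)
    T = t * np.eye(l)
    xbar = X.mean(axis=0)
    S = (X - xbar).T @ (X - xbar)
    mu0 = np.zeros(l)  # assumed prior mean
    R = T + S + (am * N / (am + N)) * np.outer(xbar - mu0, xbar - mu0)
    return (-0.5 * N * l * np.log(np.pi)
            + 0.5 * l * np.log(am / (am + N))
            + multigammaln(0.5 * (aw + N), l) - multigammaln(0.5 * aw, l)
            + 0.5 * aw * np.linalg.slogdet(T)[1]
            - 0.5 * (aw + N) * np.linalg.slogdet(R)[1])

def bge_local_score(X, child, parents, am=1.0, aw_full=None):
    """Local BGe score log p(x_child | x_parents), as the ratio of two
    subset marginals; the Wishart dof for a subset of size l is
    aw_full - n + l (the usual subset adjustment)."""
    n = X.shape[1]
    if aw_full is None:
        aw_full = n + 2.0
    fam = list(parents) + [child]
    top = log_ml_subset(X[:, fam], am, aw_full - n + len(fam))
    if parents:
        bot = log_ml_subset(X[:, list(parents)], am, aw_full - n + len(parents))
    else:
        bot = 0.0
    return top - bot
```

Because the local terms telescope into subset marginals, Markov-equivalent DAGs receive identical total scores, which is exactly the score-equivalence property the two GDBN extensions inherit and, in the eBGe case, alter.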

In the mBGe formulation, dynamic edges affect the mean vector through a multivariate linear regression: the mean at time t is a linear function of the lagged variables, while the covariance matrix remains constant across time. A Gaussian prior on the regression coefficients allows the coefficients to be integrated out analytically, resulting in a “mean‑adjusted” BGe score that can be evaluated using the standard BGe machinery on the residuals after subtracting the time‑varying mean. Consequently, the static structure can be learned with existing CPDAG extraction tools (e.g., bnlearn) without modification.
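The mean-adjustment step can be illustrated with a small sketch: regress each variable at time t on all lag-1 variables and score the residuals with the standard static machinery. Note this uses a plain least-squares point fit for the coefficients purely for illustration, whereas the mBGe model integrates the coefficients out analytically.

```python
import numpy as np

def mbge_residuals(X):
    """mBGe-style mean adjustment (sketch): regress the variables at
    time t on all lag-1 variables plus an intercept, and return the
    residuals.  The static (intra-slice) structure can then be scored
    on these residuals with the unmodified BGe score."""
    Xt, Xlag = X[1:], X[:-1]
    # lagged design matrix with an intercept column
    Z = np.hstack([np.ones((Xlag.shape[0], 1)), Xlag])
    B, *_ = np.linalg.lstsq(Z, Xt, rcond=None)
    return Xt - Z @ B
```

Since the residuals are an ordinary multivariate sample, this is why, as noted above, existing CPDAG extraction tools apply to the mBGe static graph without change.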

The eBGe approach, by contrast, augments the variable set to include both current‑time and lagged variables in a single enlarged multivariate Gaussian model. This joint modeling treats dynamic edges as ordinary edges in the enlarged graph, which means the dynamic graph is not subject to acyclicity constraints. As a result, the presence of dynamic edges breaks the usual equivalence‑class properties (same skeleton, same v‑structures) that underlie the standard DAG‑to‑CPDAG conversion. The authors prove that eBGe induces non‑standard equivalence classes and propose a new algorithm to recover the correct CPDAG under eBGe. The algorithm modifies the usual CPDAG construction by (1) preserving the skeleton, (2) redefining v‑structures to account for dynamic edges, and (3) forcing the orientation of edges that are constrained by the dynamic component.
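The three modifications can be sketched as follows. This is our hedged reading of the construction, not the authors' algorithm: lagged parents count as extra parents when detecting unshielded colliders, so an intra-slice edge meeting an inter-slice edge at an unshielded child becomes compelled; Meek-style orientation propagation is omitted. All names and the data layout are illustrative.

```python
def ebge_cpdag_sketch(intra_edges, inter_parents, nodes):
    """Simplified DAG-to-CPDAG step adapted to eBGe: (1) keep the
    intra-slice skeleton, (2) detect v-structures with lagged parents
    included, (3) keep every inter-slice edge oriented (time-forced).
    `intra_edges` is a list of (parent, child) pairs; `inter_parents`
    maps a node to its lagged parents."""
    parents = {v: set() for v in nodes}
    for a, b in intra_edges:
        parents[b].add(a)
    adj = {frozenset(e) for e in intra_edges}
    directed = set()
    for v in nodes:
        # a lagged parent counts as a parent when looking for
        # unshielded colliders, which compels extra intra-slice edges
        all_pa = parents[v] | set(inter_parents.get(v, ()))
        for a in parents[v]:
            for b in all_pa:
                if a != b and frozenset((a, b)) not in adj:
                    directed.add((a, v))
    undirected = {frozenset(e) for e in intra_edges
                  if tuple(e) not in directed}
    return directed, undirected
```

In this sketch an edge A → C with a lagged parent of C is compelled, while an isolated edge A → B stays reversible; that asymmetry is the source of the non-standard equivalence classes described above.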

Empirical evaluation uses both synthetic data, where the true DAG and dynamic edges are known, and real‑world time‑series (economic and climate datasets). For each model, Markov chain Monte Carlo sampling is employed to explore the joint posterior over static and dynamic graphs, regression coefficients, and covariance matrices. Results show that eBGe more accurately recovers dynamic relationships and yields higher predictive log‑likelihoods, but it suffers from higher computational cost and requires the specialized CPDAG algorithm to avoid mis‑identifying equivalence classes. In contrast, mBGe achieves comparable performance on static structure recovery, runs faster, and can be used with existing software without modification.
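The structure-sampling idea can be illustrated with a toy single-edge-move Metropolis sampler over static DAGs. This is a much-simplified stand-in for the samplers used in the paper: uniform symmetric proposals, no dynamic component, and a generic log-score callback rather than either BGe variant.

```python
import numpy as np

def is_acyclic(n, edges):
    """Kahn-style check that the directed graph on n nodes is a DAG."""
    indeg = [0] * n
    adj = [[] for _ in range(n)]
    for a, b in edges:
        adj[a].append(b)
        indeg[b] += 1
    stack = [v for v in range(n) if indeg[v] == 0]
    seen = 0
    while stack:
        v = stack.pop()
        seen += 1
        for w in adj[v]:
            indeg[w] -= 1
            if indeg[w] == 0:
                stack.append(w)
    return seen == n

def structure_mcmc(n, score, steps=1000, seed=0):
    """Toy Metropolis sampler over DAGs on n nodes: propose toggling a
    single directed edge, reject cycle-creating moves, accept with the
    usual Metropolis ratio.  `score` maps a frozenset of (a, b) edges
    to a log marginal likelihood."""
    rng = np.random.default_rng(seed)
    edges = frozenset()
    cur = score(edges)
    samples = []
    for _ in range(steps):
        a, b = rng.choice(n, size=2, replace=False)
        e = (int(a), int(b))
        prop = edges - {e} if e in edges else edges | {e}
        if is_acyclic(n, prop):
            new = score(frozenset(prop))
            if np.log(rng.random()) < new - cur:
                edges, cur = frozenset(prop), new
        samples.append(edges)
    return samples
```

Posterior edge probabilities are then estimated by averaging edge indicators over the sampled graphs after burn-in.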

The paper highlights several important implications. First, the two scoring schemes are not interchangeable; they generate distinct equivalence classes, so researchers must explicitly choose the model that matches their inferential goals. Second, popular R packages (bnlearn, BiDAG) currently lack correct CPDAG extraction for eBGe, limiting their applicability to dynamic networks. Third, the concept of non‑standard equivalence classes extends beyond Gaussian models to discrete DBNs with both intra‑ and inter‑slice edges, suggesting a broader relevance. Finally, the authors argue that as applications increasingly involve hybrid time‑series data (e.g., genomics, finance), the tools and theoretical insights presented here will be essential for robust structure learning in dynamic settings.

