Identifiability in Graphical Discrete Lyapunov Models


In this paper, we study discrete Lyapunov models, which consist of steady-state distributions of first-order vector autoregressive models. The parameter matrix of such a model encodes a directed graph whose vertices correspond to the components of the random vector. This combinatorial framework naturally allows for cycles in the graph structure. We focus on the fundamental problem of identifying the entries of the parameter matrix. In contrast to the classical setting, we assume non-Gaussian error terms, which allows us to use the higher-order cumulants of the model. In this setup, we show generic identifiability for directed acyclic graphs with self-loops at each vertex and show how to express the parameters as a rational function of the cumulants. Furthermore, we establish local identifiability for all directed graphs containing self-loops at each vertex and no isolated vertices. Finally, we provide first results on the defining equations of the models, showing model equivalence for certain graphs and paving the way towards structure learning.


💡 Research Summary

This paper investigates the identifiability of parameters in discrete-time vector autoregressive (VAR(1)) models when the stationary distribution is described by higher-order cumulants rather than just the covariance matrix. The authors consider a non-Gaussian error setting, which allows the use of third- and fourth-order cumulants to recover the underlying parameter matrix \(A\) and the error cumulants \(\Omega^{(n)}\).

First, they derive explicit tensor-valued "discrete Lyapunov equations" for any order \(n\). For a Schur-stable matrix \(A\) and independent, identically distributed errors, the \(n\)-th order cumulant tensor \(C^{(n)}\) of the stationary distribution satisfies
\[
C^{(n)} = C^{(n)} \cdot (A, \dots, A) + \Omega^{(n)},
\]
where \(\cdot\,(A, \dots, A)\) denotes the multilinear action of \(A\) on each mode of the tensor. In vectorized form this reads \(\operatorname{vec}(C^{(n)}) = (I - A^{\otimes n})^{-1} \operatorname{vec}(\Omega^{(n)})\), and for \(n = 2\) it reduces to the classical discrete Lyapunov (Stein) equation \(\Sigma = A \Sigma A^{\top} + \Omega\).
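The vectorized form of the equation gives a direct linear solver for the stationary cumulants of any order. The following sketch (the function name and numerical values are illustrative, not taken from the paper) solves the system for a Schur-stable \(A\) and checks the result against the defining equation for orders 2 and 3:

```python
import numpy as np
from functools import reduce

def stationary_cumulant(A, Omega_n):
    """Solve the order-n discrete Lyapunov equation
    C = C . (A, ..., A) + Omega_n
    via vectorization: vec(C) = (I - A^{kron n})^{-1} vec(Omega_n)."""
    d = A.shape[0]
    n = Omega_n.ndim
    A_kron = reduce(np.kron, [A] * n)  # A^{kron n}, acts on row-major vec(C)
    vec_C = np.linalg.solve(np.eye(d ** n) - A_kron, Omega_n.reshape(-1))
    return vec_C.reshape(Omega_n.shape)

# Hypothetical Schur-stable parameter matrix (spectral radius < 1)
A = np.array([[0.5, 0.2],
              [0.0, 0.4]])

# Order 2: Omega is the error covariance; the solution is the stationary
# covariance, satisfying the Stein equation Sigma = A Sigma A^T + Omega.
Omega2 = np.array([[1.0, 0.3],
                   [0.3, 2.0]])
Sigma = stationary_cumulant(A, Omega2)
assert np.allclose(Sigma, A @ Sigma @ A.T + Omega2)

# Order 3: independent error coordinates give a diagonal third-cumulant
# tensor; nonzero entries reflect the assumed non-Gaussianity.
Omega3 = np.zeros((2, 2, 2))
Omega3[0, 0, 0], Omega3[1, 1, 1] = 1.5, -0.8
C3 = stationary_cumulant(A, Omega3)
# Check C3 = C3 x_1 A x_2 A x_3 A + Omega3 (multilinear action of A)
C3_mapped = np.einsum('ia,jb,kc,abc->ijk', A, A, A, C3)
assert np.allclose(C3, C3_mapped + Omega3)
```

Schur-stability of \(A\) guarantees that \(I - A^{\otimes n}\) is invertible, so the linear solve always succeeds; for the order-2 case, `scipy.linalg.solve_discrete_lyapunov` performs the analogous computation.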

