Connectivity Structure of Systems
In this paper, we consider to what degree the structure of a linear system is determined by the system’s input/output behavior. The structure of a linear system is a directed graph in which the vertices represent the variables in the system and an edge (x, y) exists if x directly influences y. A number of studies have attempted to identify such structures from input/output data, so our main aim is to assess to what degree the results of such studies are valid. We begin by showing that in many cases, applying a linear transformation to a system changes the system’s graph. Furthermore, we show that even the graph’s components and their interactions are not determined by input/output behavior. From these results, we conclude that without further assumptions, very few aspects, if any, of a system’s structure are determined by its input/output relation. We then consider a number of such assumptions. First, we show that for a number of parameterizations, we can characterize when two systems have the same structure. Second, in many applications, domain knowledge allows us to exclude certain interactions, i.e., to assume that a given variable x does not influence another variable y. We show that such assumptions cannot suffice to identify a system’s parameters from input/output data. We conclude that identifying a system’s structure from input/output data may not be possible given only assumptions of the form “x does not influence y”.
💡 Research Summary
The paper investigates the fundamental question of whether the graph‑based structure of a linear dynamical system—i.e., the directed network that records which variables directly influence which others—can be uniquely recovered from input‑output (i/o) data alone. The authors begin by formalizing a linear time‑invariant (LTI) state‑space model (A, B, C, D) and defining its associated graph: vertices correspond to inputs, states, and outputs, and a directed edge (x → y) exists whenever variable y is directly affected by x (non‑zero entries in A, B, C, D).
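To make the definition concrete, here is a minimal sketch (our own illustration, not code from the paper) that reads the edge set of the structure graph off the non-zero patterns of (A, B, C, D); the vertex labels u*/x*/y* and the tolerance parameter are assumptions made for the example.

```python
import numpy as np

def structure_graph(A, B, C, D, tol=0.0):
    """Edge set of the structure graph of an LTI system (A, B, C, D).

    Vertices u0.., x0.., y0.. stand for inputs, states, and outputs; an edge
    (src, dst) is present whenever the matrix entry coupling dst to src is
    non-zero, i.e. whenever src directly influences dst.
    """
    n, m, p = A.shape[0], B.shape[1], C.shape[0]
    u = [f"u{j}" for j in range(m)]
    x = [f"x{i}" for i in range(n)]
    y = [f"y{k}" for k in range(p)]
    edges = set()
    edges |= {(x[j], x[i]) for i in range(n) for j in range(n) if abs(A[i, j]) > tol}
    edges |= {(u[j], x[i]) for i in range(n) for j in range(m) if abs(B[i, j]) > tol}
    edges |= {(x[j], y[k]) for k in range(p) for j in range(n) if abs(C[k, j]) > tol}
    edges |= {(u[j], y[k]) for k in range(p) for j in range(m) if abs(D[k, j]) > tol}
    return edges

# Example: an integrator chain u0 -> x0 -> x1 -> y0
A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[1.0], [0.0]])
C = np.array([[0.0, 1.0]])
D = np.array([[0.0]])
print(sorted(structure_graph(A, B, C, D)))  # [('u0', 'x0'), ('x0', 'x1'), ('x1', 'y0')]
```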
The first major result shows that applying an arbitrary invertible linear transformation T to the state vector (producing a new realization (A′=TAT⁻¹, B′=TB, C′=CT⁻¹, D′=D)) leaves the i/o transfer function unchanged but generally changes the underlying graph. Consequently, two systems that are i/o equivalent can possess different structural graphs, demonstrating that i/o behavior alone does not determine the graph.
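A small numerical check of this point, using an assumed toy system rather than an example taken from the paper: the similarity transform leaves the transfer function H(s) = C(sI − A)⁻¹B + D unchanged at every s, but the zero pattern of A, and hence the state-to-state edges of the graph, changes.

```python
import numpy as np

A = np.array([[-1.0, 0.0], [1.0, -2.0]])   # edge x0 -> x1, no edge x1 -> x0
B = np.array([[1.0], [0.0]])
C = np.array([[0.0, 1.0]])
D = np.array([[0.0]])

T = np.array([[1.0, 1.0], [0.0, 1.0]])     # any invertible T would do here
Ti = np.linalg.inv(T)
A2, B2, C2, D2 = T @ A @ Ti, T @ B, C @ Ti, D

def H(A, B, C, D, s):
    """Transfer function H(s) = C (sI - A)^{-1} B + D."""
    return C @ np.linalg.inv(s * np.eye(A.shape[0]) - A) @ B + D

for s in (1j, 2.0, -0.5 + 3.0j):
    assert np.allclose(H(A, B, C, D, s), H(A2, B2, C2, D2, s))

print(np.abs(A) > 1e-12)    # original state-to-state adjacency pattern
print(np.abs(A2) > 1e-12)   # transformed pattern: the edge set has changed
```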
The authors then explore weaker graph equivalence notions (homomorphism, quasi‑isomorphism, etc.) hoping to find a relation preserved under linear transformations. All such candidates either fail to be true equivalence relations or lead to pathological identifications where distinct structures become indistinguishable. Even when the graph is “condensed” by collapsing each strongly connected component into a single node, linear transformations can still produce different condensed graphs.
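For reference, the condensation step mentioned above collapses each strongly connected component into a single node; a quick sketch using networkx (an assumed helper library, not anything the paper prescribes):

```python
import networkx as nx

# Toy structure graph: x0 and x1 form a strongly connected component.
G = nx.DiGraph([("u0", "x0"), ("x0", "x1"), ("x1", "x0"), ("x1", "x2"), ("x2", "y0")])
Gc = nx.condensation(G)  # nodes are SCC indices; 'members' maps back to variables
print([sorted(Gc.nodes[n]["members"]) for n in Gc.nodes])
print(list(Gc.edges))
```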
Given these negative findings, the paper proceeds to examine what additional assumptions might enable structure identification. The first assumption restricts attention to minimal single‑input‑single‑output (SISO) systems in a canonical form. Under this restriction, the authors derive necessary and sufficient conditions for two realizations to share the same graph, effectively characterizing when structure is uniquely determined. However, this result applies only to a very narrow class of systems.
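To illustrate how a canonical form can pin down structure, the sketch below builds the textbook controllable canonical realization of a strictly proper SISO transfer function; in this form the zero pattern of (A, b, c) is determined entirely by which numerator and denominator coefficients vanish. This is a generic canonical form used for illustration, and the specific form analyzed in the paper may differ.

```python
import numpy as np

def controllable_canonical(num, den):
    """Controllable canonical realization of num(s)/den(s).

    `den` holds the denominator coefficients from highest to lowest degree
    (degree n); `num` holds the numerator coefficients (degree < n).
    """
    den = np.asarray(den, dtype=float)
    den = den / den[0]                      # make the denominator monic
    n = len(den) - 1
    num = np.pad(np.asarray(num, dtype=float), (n - len(num), 0))
    A = np.zeros((n, n))
    A[:-1, 1:] = np.eye(n - 1)              # companion (shift) structure
    A[-1, :] = -den[::-1][:-1]              # last row: -a_0, ..., -a_{n-1}
    b = np.zeros((n, 1))
    b[-1, 0] = 1.0
    c = num[::-1].reshape(1, n)             # row of numerator coefficients
    return A, b, c

# Example: H(s) = (s + 2) / (s^3 + 4 s^2 + 5 s + 6)
A, b, c = controllable_canonical([1.0, 2.0], [1.0, 4.0, 5.0, 6.0])
print(A); print(b.ravel()); print(c.ravel())
```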
The second assumption incorporates domain knowledge in the form of explicit non‑influence constraints: certain edges are known a priori to be absent (e.g., “x does not directly affect y”). The authors prove that such constraints, by themselves, are insufficient to guarantee unique identification of the system parameters from i/o data. In particular, if any state variable influences an output (the typical case in realistic models), the system cannot be uniquely identified even when a full set of non‑influence edges is supplied.
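One simple mechanism consistent with this negative result (our illustration, not necessarily the construction used in the paper): a diagonal change of state coordinates preserves every zero entry of (A, B, C, D) and the transfer function, yet rescales the non-zero parameters, so zero-pattern constraints alone cannot single out the true values.

```python
import numpy as np

A = np.array([[-1.0, 0.0], [2.0, -3.0]])
B = np.array([[1.0], [0.0]])
C = np.array([[0.0, 1.0]])
D = np.array([[0.0]])

T = np.diag([2.0, 5.0])                    # any non-identity diagonal scaling
Ti = np.linalg.inv(T)
A2, B2, C2 = T @ A @ Ti, T @ B, C @ Ti

# The non-influence (zero-pattern) constraints are still satisfied ...
assert np.array_equal(A != 0, A2 != 0)
assert np.array_equal(B != 0, B2 != 0)
assert np.array_equal(C != 0, C2 != 0)

# ... and the i/o behavior is unchanged, yet the parameter values differ.
s = 1.5
def H(A, B, C):
    return C @ np.linalg.inv(s * np.eye(2) - A) @ B + D

assert np.allclose(H(A, B, C), H(A2, B2, C2))
print(A, A2, sep="\n\n")
```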
Further, the paper provides a graph‑theoretic characterization of those graphs for which “almost all” systems satisfying the constraints are minimal. These graphs must lack edges from state variables to outputs; otherwise minimality—and thus identifiability—fails for a generic set of parameter values.
The discussion connects these theoretical insights to several applied domains. In neuroscience, Dynamic Causal Modeling (DCM) uses bilinear state‑space models driven by experimental inputs; the authors note that setting the bilinear terms to zero yields a linear system whose output equals the state, a situation where the earlier impossibility results apply. In genetics, piecewise‑linear models of gene expression can be reduced to linear subsystems, again falling under the same limitations. For vector autoregressive (VAR) models, the paper shows that when the coefficient matrices and noise covariance are block‑diagonal, the resulting graph consists of disconnected subgraphs, matching the absence of Granger causality between the corresponding variable groups. However, the general relationship between VAR‑based Granger causality and the graph defined in this work remains unclear.
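As a rough illustration of the VAR point (an assumed toy example, not one taken from the paper), the sketch below simulates a VAR(1) whose coefficient matrix and noise covariance are both block-diagonal: the adjacency pattern splits into two disconnected subgraphs, and a least-squares refit recovers near-zero cross-block coefficients, i.e. neither block Granger-causes the other.

```python
import numpy as np

block1 = np.array([[0.5, 0.2], [0.1, 0.4]])
block2 = np.array([[0.3, -0.2], [0.2, 0.6]])
A1 = np.block([[block1, np.zeros((2, 2))],
               [np.zeros((2, 2)), block2]])              # block-diagonal VAR(1) coefficients
Sigma = np.kron(np.eye(2), [[1.0, 0.3], [0.3, 1.0]])     # block-diagonal noise covariance

# Structure graph: no edges between {x0, x1} and {x2, x3}.
print(np.abs(A1) > 1e-12)

# Simulate the VAR and refit it by least squares: the estimated cross-block
# coefficients come out near zero, matching the absence of Granger causality.
rng = np.random.default_rng(0)
L = np.linalg.cholesky(Sigma)
T = 2000
x = np.zeros((T, 4))
for t in range(1, T):
    x[t] = A1 @ x[t - 1] + L @ rng.standard_normal(4)

A_hat, *_ = np.linalg.lstsq(x[:-1], x[1:], rcond=None)
print(np.round(A_hat.T, 2))                              # approx. A1, off-diagonal blocks ~ 0
```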
In summary, the authors demonstrate that without strong additional assumptions, the structure of a linear system is largely indeterminate from i/o data alone. Minimality, SISO canonical forms, or explicit non‑influence constraints provide only limited, highly specialized avenues for identification. Consequently, many contemporary data‑driven methods for inferring connectivity—whether in brain imaging, systems biology, or econometrics—lack a rigorous guarantee of recovering the true underlying graph. The paper calls for caution in interpreting such inferred structures and suggests that future work must incorporate richer prior information or alternative experimental designs to achieve reliable structural identification.