Characterization of Information Channels for Asymptotic Mean Stationarity and Stochastic Stability of Non-stationary/Unstable Linear Systems
Stabilization of non-stationary linear systems over noisy communication channels is considered. Stochastically stable sources, and unstable but noise-free or bounded-noise systems, have been extensively studied in the information theory and control theory literature since the 1970s, with a renewed interest in the past decade. There have also been studies on non-causal and causal coding of unstable/non-stationary linear Gaussian sources. In this paper, tight necessary and sufficient conditions for stochastic stabilizability of unstable (non-stationary), possibly multi-dimensional, linear systems driven by Gaussian noise over discrete channels (possibly with memory and feedback) are presented. Stochastic stability notions include recurrence, asymptotic mean stationarity and sample path ergodicity, and the existence of finite second moments. Our constructive proof uses random-time state-dependent stochastic drift criteria for stabilization of Markov chains. For asymptotic mean stationarity (and thus sample path ergodicity), it is sufficient that the channel capacity be (strictly) greater than the sum of the logarithms of the unstable pole magnitudes, for memoryless channels and a class of channels with memory. This condition is also necessary under a mild technical condition. Sufficient conditions for the existence of finite average second moments for such systems driven by unbounded noise are provided.
💡 Research Summary
The paper tackles the long‑standing problem of stabilizing unstable (non‑stationary) linear systems when state information must reach the controller over a noisy discrete‑time communication channel. While earlier work focused mainly on deterministic or mean‑square stability and on memoryless channels, this work provides a unified, information‑theoretic characterization that applies to multi‑dimensional systems, Gaussian process noise, and channels with memory and feedback.
The authors consider a controlled linear system
x_{t+1} = A x_t + B u_t + w_t, w_t ∼ 𝒩(0, Σ_w),
where A may have eigenvalues λ_i with |λ_i|>1 (the unstable poles). The state is encoded and sent through a discrete channel described by a conditional distribution P(y_t|q^t, y^{t−1}) — with q^t the channel inputs — that can capture memory (e.g., ARMA noise) and causal feedback; the controller forms u_t from the received channel outputs. The central question is: under what channel conditions does the closed‑loop system exhibit long‑run probabilistic stability?
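To see why |λ_i|>1 forces an information constraint, note that with no channel information the open‑loop second moment of a scalar plant obeys Var(x_{t+1}) = a²·Var(x_t) + σ_w², which diverges exactly when |a| ≥ 1. A quick illustrative check (scalar case; the helper name is ours, not from the paper):

```python
def state_variance(a, sigma_w2, T, v0=0.0):
    """Iterate Var(x_{t+1}) = a^2 Var(x_t) + sigma_w^2 for T steps."""
    v = v0
    for _ in range(T):
        v = a * a * v + sigma_w2
    return v

print(state_variance(1.5, 1.0, 20))   # |a| > 1: grows without bound
print(state_variance(0.5, 1.0, 20))   # |a| < 1: settles near 1/(1-0.25) = 4/3
```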
Four notions of stochastic stability are examined: (i) recurrence (the state returns to a bounded set infinitely often), (ii) asymptotic mean stationarity (AMS), (iii) sample‑path ergodicity, and (iv) existence of finite average second moments (lim sup_{T→∞} (1/T) ∑_{t<T} E‖x_t‖² < ∞). The main result is a sharp threshold expressed solely in terms of the Shannon capacity C of the channel and the unstable eigenvalues of A:
C > ∑_{|λ_i|>1} log |λ_i| ⇔ AMS (and therefore ergodicity) of the state process.
For memoryless channels this condition coincides with the classic “data‑rate > log |λ|” rule, but the authors prove it remains both sufficient and, under a mild technical assumption (essentially that the channel output distribution is sufficiently continuous in the input), necessary for a broad class of channels with memory.
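The threshold itself is easy to compute. The sketch below is illustrative (`required_rate` and `bsc_capacity` are our own helper names): it compares the rate demanded by the unstable eigenvalues of A with the capacity of a binary symmetric channel:

```python
import numpy as np

def required_rate(A):
    """Sum of log2|lambda_i| over unstable eigenvalues of A (bits/sample)."""
    return sum(np.log2(abs(l)) for l in np.linalg.eigvals(A) if abs(l) > 1)

def bsc_capacity(p):
    """Capacity of a binary symmetric channel with crossover p (bits/use)."""
    if p in (0.0, 1.0):
        return 1.0
    return 1.0 + p * np.log2(p) + (1 - p) * np.log2(1 - p)

A = np.array([[1.5, 1.0],
              [0.0, 0.5]])            # eigenvalues 1.5 (unstable) and 0.5
print(required_rate(A))               # log2(1.5) ~ 0.585 bits/sample
print(bsc_capacity(0.01) > required_rate(A))   # clean channel: above threshold
print(bsc_capacity(0.11) > required_rate(A))   # noisy channel: falls short
```

Only the unstable eigenvalues contribute: stable modes require no information to remain bounded, which is why the threshold sums over |λ_i|>1 alone.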
The proof technique is novel. The authors construct a state‑dependent, random‑time quantization and coding scheme. The state space is partitioned into concentric shells; when the state lies in a large‑radius shell a higher instantaneous transmission rate is requested, while in inner shells a lower rate suffices. This adaptive scheme yields a Markov chain (state, quantization index) whose drift can be bounded by a Lyapunov‑type inequality:
E[V(x_{τ_{z+1}}) | ℱ_{τ_z}] ≤ V(x_{τ_z}) − δ(x_{τ_z}) + b·1_C(x_{τ_z}),
where {τ_z} is an increasing sequence of stopping times, V is a Lyapunov function, δ(·) > 0 outside a small set C, and b < ∞. Verifying this random‑time, state‑dependent drift condition yields positive Harris recurrence of the sampled chain and, from it, the AMS and ergodicity conclusions; a strengthened drift gives the finite‑moment results.
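As a toy, single‑step analogue of such a drift condition (the paper's version samples the chain at random times), one can check negative drift for a simple birth–death chain; the helper names are ours:

```python
def kernel(x, p_up=0.4):
    """Toy birth-death chain on {0, 1, 2, ...}: step up w.p. p_up, else down."""
    if x == 0:
        return [(0, 1 - p_up), (1, p_up)]
    return [(x - 1, 1 - p_up), (x + 1, p_up)]

def drift(x, V=lambda s: float(s)):
    """One-step drift E[V(X_{t+1}) | X_t = x] - V(x)."""
    return sum(p * V(y) for y, p in kernel(x)) - V(x)

print(drift(0))   # bounded on the small set C = {0} (here 0.4)
print(drift(5))   # strictly negative off C (here -0.2, up to rounding)
```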
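The shell‑based adaptive scheme can also be caricatured in a few lines. The sketch below is our own simplification, not the paper's coding scheme: a scalar plant, a noiseless R‑bit link, and a quantizer whose range zooms out on overflow (outer shells) and zooms in otherwise; all parameter choices are illustrative:

```python
import numpy as np

def simulate(a=1.5, R=3, T=200, sigma_w=0.1, x0=10.0, seed=1):
    """Adaptive 'zoom' quantizer sketch for x_{t+1} = a x_t + u_t + w_t."""
    rng = np.random.default_rng(seed)
    levels = 2 ** R            # quantizer cells per transmission
    delta, x = 1.0, x0         # delta = current quantizer half-range
    for _ in range(T):
        if abs(x) > delta:     # outer shell: overflow symbol, zoom out
            u = 0.0
            delta *= 2 * a     # expand faster than the instability
        else:                  # inner shells: quantize and control
            bin_w = 2 * delta / levels
            q = min(int((x + delta) / bin_w), levels - 1)
            xhat = -delta + (q + 0.5) * bin_w
            u = -a * xhat      # dead-beat control on the quantized estimate
            delta = a * delta / levels + 3 * sigma_w   # zoom in
        x = a * x + u + sigma_w * rng.standard_normal()
    return abs(x)

print(simulate(sigma_w=0.0))   # noise-free run: state is driven to ~0
print(simulate(sigma_w=0.1))   # noisy run: state remains bounded in this run
```

Here 2^R > |a| (i.e., R > log₂|a|) is what makes the inner‑shell contraction factor a/2^R less than one, mirroring the capacity condition above.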