An adaptive data sampling strategy for stabilizing dynamical systems via controller inference
Learning stabilizing controllers from data is an important task in engineering applications; however, collecting informative data is challenging because unstable systems often lead to rapidly growing or erratic trajectories. In this work, we propose an adaptive sampling scheme that generates data while simultaneously stabilizing the system to avoid instabilities during the data collection. Under mild assumptions, the approach provably generates data sets that are informative for stabilization and have minimal size. The numerical experiments demonstrate that controller inference with the novel adaptive sampling approach learns controllers with up to one order of magnitude fewer data samples than unguided data generation. The results show that the proposed approach opens the door to stabilizing systems in edge cases and limit states where instabilities often occur and data collection is inherently difficult.
💡 Research Summary
The paper addresses a fundamental challenge in data‑driven control of unstable nonlinear dynamical systems: collecting informative data without allowing the system to diverge into uninformative, potentially dangerous trajectories. Traditional data‑collection strategies rely on random excitation signals that guarantee persistent excitation but ignore the specific task of stabilization, often leading to excessive sample complexity and numerical instability when the underlying system is unstable.
To overcome these limitations, the authors propose an adaptive data‑sampling scheme that simultaneously stabilizes the system while generating data. The core theoretical foundation is the concept of data informativity, which asks whether a given data set (U, X, Y) contains enough information to solve a particular control task—in this case, state‑feedback stabilization. For linear systems, Proposition 1 states that if the state data matrix X has full row rank and there exists a right inverse X† such that Y X† has only stable eigenvalues, then a stabilizing feedback K = U X† can be constructed. Equivalent linear matrix inequality (LMI) conditions using a matrix Θ are also provided for both continuous‑ and discrete‑time models.
Because many high‑dimensional systems evolve on low‑dimensional manifolds, the authors extend informativity to a subspace V of dimension r ≪ N (Proposition 2). By projecting the data onto V (X̂ = VᵀX, Ŷ = VᵀY) and applying the same informativity conditions, a low‑dimensional feedback K̂ can be inferred. The full‑dimensional controller is then recovered as K = K̂ Vᵀ. This projection dramatically reduces the required number of samples: only O(r) data points are needed to guarantee stabilization, instead of O(N).
Algorithm 1 operationalizes this idea. Starting from an initial high‑dimensional data triplet (Uₙ, Xₙ, Yₙ) and an orthogonal basis V (e.g., obtained via singular‑value decomposition of Xₙ − x̄), the algorithm: (1) forms reduced data (U, X̂, Ŷ); (2) solves for Θ that satisfies the appropriate LMI, yielding K̂ = U Θ (X̂ Θ)⁻¹; (3) lifts K̂ back to the original space. Crucially, the controller obtained at each iteration is used to generate the next batch of data, ensuring that the system remains (locally) stable during sampling. The authors prove that under mild assumptions on the input signals—specifically, that they are "stable informative inputs"—the adaptive procedure converges in a finite number of steps to a controller that stabilizes the full nonlinear system (via linearization around the equilibrium).
The paper provides two extensive numerical experiments. The first involves a 39‑bus power‑grid model. Random excitation requires several thousand samples to obtain a stabilizing controller, whereas the adaptive method achieves stabilization with roughly 300–400 samples, reducing the sample size by an order of magnitude. The second experiment tackles laminar flow behind an obstacle, modeled with a high‑dimensional finite‑element discretization (≈10⁴ states). By extracting a 20‑dimensional subspace via SVD and applying the adaptive scheme, only about 2 000 samples are needed, compared with tens of thousands for a non‑adaptive approach. In both cases, the learned controllers not only stabilize the system but also exhibit good performance on unseen initial conditions.
In summary, the contributions of the paper are:
- Introduction of an adaptive sampling framework that stabilizes the system during data acquisition, preventing divergence and ensuring data quality.
- Extension of data‑informativity theory to low‑dimensional subspaces, enabling controller inference with minimal sample complexity.
- A concrete algorithm that iteratively refines the controller and the data set, with provable convergence under mild conditions.
- Demonstration on realistic high‑dimensional benchmarks, showing up to ten‑fold reduction in required data.
The authors suggest future directions such as automated online subspace selection, robust extensions to handle model uncertainties, and real‑time implementation of the adaptive scheme. Overall, the work offers a compelling pathway toward efficient, safe, and theoretically grounded data‑driven control of unstable nonlinear systems.