A numerical solution to the minimum-time control problem for linear discrete-time systems
The minimum-time control problem consists of finding a control policy that drives a given dynamic system from a given initial state to a given target state (or set of states) as quickly as possible. This is a well-known challenging problem in optimal control theory for which closed-form solutions exist only for a few systems of small dimension. This paper presents a very generic solution to the minimum-time problem for arbitrary discrete-time linear systems. It is a numerical solution based on sparse optimization, that is, the minimization of the number of nonzero elements in the state sequence over a fixed control horizon. We consider both single-input and multiple-input systems. An important observation is that, contrary to the continuous-time case, the minimum-time control for discrete-time systems is not necessarily entirely bang-bang.
💡 Research Summary
This paper addresses the classic minimum‑time control problem for discrete‑time linear systems, where the goal is to drive the state from a given non‑zero initial condition to a target (typically the origin) in the smallest possible number of steps while respecting bounded control constraints. Unlike continuous‑time systems, where Pontryagin’s Maximum Principle guarantees a bang‑bang optimal policy, discrete‑time systems do not necessarily exhibit such a structure; optimal controls may assume intermediate values.
The authors propose a novel viewpoint: the minimum‑time trajectory is the sparsest possible state sequence over a fixed horizon T (i.e., it contains the largest number of zero‑state vectors, all placed at the end of the sequence). Directly minimizing the ℓ₀‑norm of the vector of state norms is combinatorial and intractable. To obtain a tractable formulation, they replace the ℓ₀‑norm with a weighted sum of ℓ₂‑norms of the state vectors, a group‑sparsity‑type relaxation that yields a convex optimization problem.
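To make the problem concrete, here is a small sketch of the underlying minimum‑time question, not the authors' sparse‑optimization algorithm: for each candidate horizon T, a linear program computes the smallest peak input needed to steer the state to the origin in exactly T steps, and the minimum time is the first horizon whose required peak fits the input bound. The double‑integrator system, the input bound, and the helper `min_peak_input` are all assumptions chosen for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Toy system (an assumption for this sketch): a discretized double integrator
# with a bounded scalar input |u_k| <= 1. Not taken from the paper.
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.5], [1.0]])
x0 = np.array([5.0, 0.0])
u_max = 1.0

def min_peak_input(T):
    """Smallest peak input max_k |u_k| that drives x0 to the origin in exactly
    T steps, or None if no length-T input sequence reaches the origin at all."""
    # x_T = A^T x0 + Phi u,  with column  Phi[:, k] = A^(T-1-k) B
    Phi = np.hstack([np.linalg.matrix_power(A, T - 1 - k) @ B for k in range(T)])
    b_eq = -np.linalg.matrix_power(A, T) @ x0
    # LP variables [u_0, ..., u_{T-1}, s]: minimize s  s.t.  -s <= u_k <= s,  Phi u = b_eq
    c = np.zeros(T + 1)
    c[-1] = 1.0
    A_ub = np.zeros((2 * T, T + 1))
    for k in range(T):
        A_ub[2 * k, [k, -1]] = [1.0, -1.0]       #  u_k - s <= 0
        A_ub[2 * k + 1, [k, -1]] = [-1.0, -1.0]  # -u_k - s <= 0
    A_eq = np.hstack([Phi, np.zeros((Phi.shape[0], 1))])
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * T),
                  A_eq=A_eq, b_eq=b_eq,
                  bounds=[(None, None)] * T + [(0, None)])
    return res.fun if res.success else None

# Minimum time = first horizon whose required peak input fits the bound.
for T in range(1, 50):
    s = min_peak_input(T)
    if s is not None and s <= u_max + 1e-9:
        break

print("minimum number of steps:", T)  # -> 5 for this instance
print("optimal peak input:", s)       # -> 5/6
```

For this instance the answer is T = 5, and the least‑peak control at that horizon has magnitude 5/6, strictly inside the bound, which echoes the paper's observation that discrete‑time minimum‑time controls need not be bang‑bang.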