Control Lyapunov Functions for Optimality in Sontag-Type Control
Given a Control Lyapunov Function (CLF), Sontag’s famous formula provides a nonlinear state feedback guaranteeing asymptotic stability of the setpoint. At the same time, a cost function that depends on the CLF is minimized. While there exist methods to construct CLFs for certain classes of systems, the impact on the resulting performance is unclear. This article aims to make two contributions to this problem: (1) We show that using the value function of an LQR design as the CLF, the resulting Sontag-type controller minimizes a classical quadratic cost around the setpoint and a CLF-dependent cost within the domain where the CLF condition holds. We also show that the closed-loop system is stable within a local region at least as large as that generated by the LQR. (2) We present a related CLF design for feedback-linearizable systems that yields a global CLF in a straightforward manner; the Sontag design then guarantees global asymptotic stability while minimizing a quadratic cost at the setpoint and a CLF-dependent cost in the whole state space. Both designs are constructive and easily applicable to nonlinear multi-input systems under mild assumptions.
💡 Research Summary
The paper addresses the longstanding tension between achieving a large region of attraction (ROA) and optimal performance for nonlinear control systems. It builds on Sontag’s formula, which guarantees asymptotic stability when a control‑Lyapunov function (CLF) is available, and investigates how the choice of CLF influences the associated cost functional.
The first contribution shows that if the value function of a linear‑quadratic regulator (LQR) design—namely \(V(x)=\tfrac12 x^{\top}Px\), where \(P\) solves the algebraic Riccati equation—is used as a CLF for the original nonlinear system, the Sontag‑type feedback reduces exactly to the LQR law \(u=-R^{-1}B^{\top}Px\) in a neighborhood of the equilibrium. The authors prove that the scalar factor \(\lambda(x)\) appearing in the Sontag controller equals one, which implies that the controller minimizes the standard quadratic cost \(\int_0^\infty \bigl(x^{\top}Qx+u^{\top}Ru\bigr)\,dt\) locally. Moreover, they compare the ROA guaranteed by the LQR with that obtained from the Sontag controller using the same CLF. By a Taylor‑expansion argument they demonstrate that any sublevel set of \(V\) on which the LQR yields \(\dot V<0\) is also contained in the region where the Sontag controller guarantees \(\dot V<0\). Consequently, the Sontag‑type design inherits at least the LQR's local ROA and often enlarges it.
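The construction above can be sketched in a few lines of Python. The sketch below uses Sontag's universal formula with the LQR value function \(V(x)=\tfrac12 x^{\top}Px\) as the CLF; the normalized inverted-pendulum dynamics `f`, `g` are an assumed illustrative example, not the paper's exact model, and the plain universal formula stands in for the paper's Sontag-type controller with scaling factor \(\lambda(x)\).

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Assumed normalized inverted pendulum, x = (angle, angular velocity):
# xdot = f(x) + g(x) u
def f(x):
    return np.array([x[1], np.sin(x[0])])

def g(x):
    return np.array([[0.0], [1.0]])

# Linearization at the upright equilibrium and LQR design
A = np.array([[0.0, 1.0], [1.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.eye(1)
P = solve_continuous_are(A, B, Q, R)   # V(x) = 0.5 x' P x is the CLF

def sontag_feedback(x):
    """Sontag's universal formula for the CLF V(x) = 0.5 x' P x."""
    gradV = P @ x                      # gradient of V
    a = gradV @ f(x)                   # L_f V(x)
    b = g(x).T @ gradV                 # L_g V(x), a vector for multi-input g
    c = b @ b                          # |L_g V|^2
    if c < 1e-12:                      # u = 0 where the input has no effect on V
        return np.zeros(B.shape[1])
    return -((a + np.sqrt(a**2 + c**2)) / c) * b
```

By construction this feedback gives \(\dot V = -\sqrt{a^2 + c^2} < 0\) wherever \(L_g V \neq 0\), which is the decrease property the summary refers to.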
The second contribution extends the idea to globally feedback‑linearizable systems. By applying a global diffeomorphism \(z=T(x)\) that brings the system into the form \(\dot z=\tilde A z+\tilde B\bigl(\psi(z)+\gamma(z)u\bigr)\), the authors construct a global CLF that is quadratic in the transformed coordinates: \(V(z)=\tfrac12 z^{\top}\tilde P z\). The matrix \(\tilde P\) is chosen to align locally with the LQR‑derived \(P\) via \(\tilde P = (\partial T/\partial x|_{0})^{-\top}P\,(\partial T/\partial x|_{0})^{-1}\). This ensures that the same local optimality result holds near the origin, while the CLF condition is satisfied everywhere, guaranteeing global asymptotic stability under the Sontag feedback.
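The alignment of \(\tilde P\) with the LQR matrix \(P\) is a one-line computation once the Jacobian of \(T\) at the origin is available. A minimal sketch, where both the diffeomorphism `T` and the matrix `P` are placeholder assumptions chosen so that \(\partial T/\partial x|_{0}\) is well conditioned:

```python
import numpy as np

# Assumed diffeomorphism T into linearizable coordinates (placeholder example)
def T(x):
    return np.array([x[0], x[1] + x[0]**3])

def jacobian(func, x0, eps=1e-6):
    """Central-difference Jacobian of func at x0."""
    n = len(x0)
    J = np.zeros((n, n))
    for i in range(n):
        d = np.zeros(n)
        d[i] = eps
        J[:, i] = (func(x0 + d) - func(x0 - d)) / (2 * eps)
    return J

P = np.array([[2.0, 1.0], [1.0, 2.0]])   # assumed LQR value-function matrix
J0 = jacobian(T, np.zeros(2))            # dT/dx at the origin
J0inv = np.linalg.inv(J0)
P_tilde = J0inv.T @ P @ J0inv            # V(z) = 0.5 z' P_tilde z matches LQR locally
```

Since the congruence transform preserves symmetry and positive definiteness, \(V(z)=\tfrac12 z^{\top}\tilde P z\) remains a valid quadratic candidate in the transformed coordinates.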
Algorithm 1 summarizes the practical steps: linearize the system, verify stabilizability, select \(Q\) and \(R\), solve the Riccati equation, and compute the Sontag feedback using the constructed CLF. Numerical simulations on the classic inverted pendulum illustrate that the proposed controller matches the LQR's cost near the setpoint while providing a noticeably larger ROA than both the pure LQR and a conventional feedback‑linearizing controller.
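The design steps up to the CLF can be collected into a single routine. This is a hedged sketch of the standard LQR recipe the summary describes, not the paper's exact Algorithm 1; the function name `design_clf_from_lqr` and the Hautus-test stabilizability check are our own illustrative choices.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def design_clf_from_lqr(A, B, Q, R):
    """Verify stabilizability of (A, B), solve the algebraic Riccati
    equation, and return P defining the CLF V(x) = 0.5 x' P x."""
    n = A.shape[0]
    # Hautus test: every unstable mode must be reachable through B
    for lam in np.linalg.eigvals(A):
        if lam.real >= 0:
            M = np.hstack([lam * np.eye(n) - A, B])
            if np.linalg.matrix_rank(M) < n:
                raise ValueError("(A, B) is not stabilizable")
    return solve_continuous_are(A, B, Q, R)

# Usage with the pendulum linearization from above (assumed example)
A = np.array([[0.0, 1.0], [1.0, 0.0]])
B = np.array([[0.0], [1.0]])
P = design_clf_from_lqr(A, B, np.eye(2), np.eye(1))
```

With \(P\) in hand, the Sontag feedback is evaluated pointwise from \(V(x)=\tfrac12 x^{\top}Px\) as in the earlier discussion, so the whole pipeline needs only a linearization and one Riccati solve.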
Overall, the paper provides a clear, constructive methodology for turning an LQR value function into a CLF that yields a Sontag‑type controller with provable local optimality and enhanced stability regions. It further shows how to lift this approach to globally stabilizing designs for feedback‑linearizable systems, thereby bridging the gap between Lyapunov‑based stability and optimal control in a practically implementable way.