Small-Gain Theorem Based Distributed Prescribed-Time Convex Optimization For Networked Euler-Lagrange Systems
In this paper, we address the distributed prescribed-time convex optimization (DPTCO) problem for a class of networked Euler-Lagrange systems under undirected connected graphs. By utilizing the position-dependent measured gradient value of the local objective function and local information exchange among neighboring agents, a set of auxiliary systems is constructed to cooperatively seek the optimal solution. The DPTCO problem is then converted into the prescribed-time stabilization problem of an interconnected error system. A prescribed-time small-gain criterion is proposed to characterize the prescribed-time stabilization of this system, offering a novel approach whose effectiveness goes beyond existing results on asymptotic or finite-time stabilization of interconnected systems. Based on the criterion and the auxiliary systems, novel adaptive prescribed-time local tracking controllers are designed for the subsystems. The prescribed-time convergence relies on the introduction of time-varying gains that increase to infinity as time tends to the prescribed time. A Lyapunov function together with a prescribed-time mapping is used to prove the prescribed-time stability of the closed-loop system as well as the boundedness of internal signals. Finally, the theoretical results are verified by a numerical example.
💡 Research Summary
This paper tackles the problem of distributed convex optimization for a class of networked Euler-Lagrange systems (NELS) under undirected connected graphs, with the additional requirement that all agents reach the optimal solution within a user-specified finite time, known as prescribed-time convergence. The authors first formulate each robot's dynamics in the standard Euler-Lagrange form
\[ M_i(q_i)\ddot q_i + C_i(q_i,\dot q_i)\dot q_i = \tau_i, \]
where the inertia matrix \(M_i\) is positive definite, the Coriolis-centripetal matrix \(C_i\) satisfies the usual skew-symmetry property, and unknown constant parameters \(\theta_i\) appear linearly through a regression matrix \(\Omega_i\).
The global objective is to minimize \(\sum_{i=1}^N f_i(y_i)\) subject to consensus constraints \(y_i = y_j\) for all agents. Each local cost \(f_i\) is assumed to be strongly convex with a Lipschitz continuous gradient, but only the measured gradient value \(\mathrm{d} f_i(y_i)/\mathrm{d} y_i\) (i.e., a position-dependent quantity) is available to the controller; the analytical expression of \(\nabla f_i\) is not required. This measurement-only assumption reduces communication load and makes the approach applicable to scenarios where gradient information can only be sensed (e.g., source-seeking with radiation sensors).
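To make the measurement-only idea concrete, the sketch below stands in for a gradient sensor by returning a finite-difference approximation of a local cost's gradient at the current position, so no analytic expression for \(\nabla f_i\) is ever used. The quadratic cost `f_i` and its offset `c` are hypothetical, chosen only for illustration:

```python
import numpy as np

def measured_gradient(f, y, h=1e-6):
    """Approximate the gradient of a local cost f at position y by central
    finite differences -- a stand-in for a sensor that reports the gradient
    value at the current position without knowing f's analytic form."""
    y = np.asarray(y, dtype=float)
    g = np.zeros_like(y)
    for k in range(y.size):
        e = np.zeros_like(y)
        e[k] = h
        g[k] = (f(y + e) - f(y - e)) / (2.0 * h)
    return g

# Hypothetical strongly convex local cost: f_i(y) = 0.5 * ||y - c||^2
c = np.array([1.0, -2.0])
f_i = lambda y: 0.5 * np.dot(y - c, y - c)
print(measured_gradient(f_i, np.zeros(2)))  # ≈ [-1, 2], i.e. y - c at y = 0
```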
To achieve the prescribed-time goal, the authors introduce a set of auxiliary dynamics \(z_i\) for each agent. These auxiliary systems use the measured gradients and local relative position information \(\chi_i = \sum_{j\in\mathcal N_i}(y_j - y_i)\) to generate a cooperative search direction. The auxiliary dynamics are scaled by a time-varying gain \(\mu(t)\) belonging to class \(\mathcal K_T\); a typical choice is \(\mu(t) = (T/(T + t_0 - t))^m\) with \(m\ge 1\). As \(t\) approaches the prescribed horizon \(T + t_0\), \(\mu(t)\) diverges to infinity, forcing the auxiliary states to converge to the optimal point exactly at the prescribed instant.
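A minimal sketch of such a gain, using the typical choice above with illustrative values \(T = 5\), \(t_0 = 0\), and \(m = 2\) (not values taken from the paper), shows the blow-up as \(t\) nears the prescribed horizon:

```python
# Time-varying gain mu(t) = (T / (T + t0 - t))**m for t in [t0, T + t0).
# T, t0, m are illustrative; the paper only requires m >= 1.
T, t0, m = 5.0, 0.0, 2

def mu(t):
    return (T / (T + t0 - t)) ** m

for t in [0.0, 2.5, 4.5, 4.99]:
    print(f"t = {t:5.2f}   mu(t) = {mu(t):12.2f}")  # grows without bound
```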
Because the auxiliary systems are tightly coupled with the physical Euler-Lagrange dynamics, the authors apply a coordinate transformation that defines error variables \(\xi_i = y_i - z_i\) (tracking error) and \(\eta_i = z_i - \bar z\) (consensus error of the auxiliary states). This yields an interconnected error system consisting of an inner loop (tracking) and an outer loop (consensus). Classical small-gain theorems, which guarantee stability when the product of subsystem gains is less than one, cannot be applied directly because the gain \(\mu(t)\) becomes unbounded near the prescribed time.
Consequently, the paper proposes a prescribed-time small-gain criterion. The criterion states that if each subsystem admits a Lyapunov function \(V_i\) satisfying
\[ \dot V_i \le -\alpha_i(\mu(t))\, V_i + \beta_i(|w_i|), \]
with \(\alpha_i\) a class \(\mathcal K_\infty\) function and \(\beta_i\) capturing inter-subsystem coupling, then the overall Lyapunov function \(V = \sum_i V_i\) obeys
\[ \dot V \le -\gamma(\mu(t))\, V \]
for some positive \(\gamma\). Since \(\mu(t)\) diverges, the integral \(\int_{t_0}^{T+t_0} \gamma(\mu(s))\,\mathrm{d}s = \infty\), guaranteeing that \(V(t)\) (and thus all error signals) decays to zero exactly at the prescribed time. This result extends traditional small-gain theory to the prescribed-time domain, providing a systematic way to handle time-varying, potentially unbounded gains.
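This mechanism can be checked numerically: integrating \(\dot V = -\gamma\,\mu(t)\,V\) with the typical gain \(\mu(t) = (T/(T-t))^m\) (here \(t_0 = 0\), and \(T\), \(m\), \(\gamma\) are illustrative constants, not values from the paper) drives \(V\) to zero before \(t\) reaches \(T\), because the divergent gain makes the accumulated decay rate infinite:

```python
import math

# Illustrative check: Vdot = -gamma * mu(t) * V, mu(t) = (T/(T - t))**m, t0 = 0.
# The integral of mu over [0, T) diverges, so V is squeezed to zero at T.
T, m, gamma = 5.0, 2, 1.0
dt = 1e-4
t, V = 0.0, 1.0
while t < T - 10 * dt:              # stop just short of the blow-up at t = T
    mu = (T / (T - t)) ** m
    V *= math.exp(-gamma * mu * dt)  # exact step for frozen mu; a plain
    t += dt                          # explicit Euler step would go unstable here
print(V)  # effectively zero (floating-point underflow) well before t = T
```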
To cope with parametric uncertainties in the Euler-Lagrange models, adaptive laws are incorporated for each agent:
\[ \dot{\hat\theta}_i = -\Gamma_i\, \Omega_i^\top(q_i,\dot q_i,\dot y_i,\ddot y_i)\, e_i, \]
where \(e_i = y_i - z_i\) is the tracking error and \(\Gamma_i\) is a positive-definite adaptation gain matrix. The adaptive term appears in the Lyapunov analysis, ensuring that the parameter estimation error remains bounded and does not jeopardize the prescribed-time convergence.
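A minimal scalar sketch of this adaptive mechanism (with illustrative gains and a hypothetical regressor, not the paper's full controller) shows the tracking error decaying while the parameter estimate stays bounded; the update sign is chosen to match the assumed error dynamics so that \(V = e^2/2 + \tilde\theta^2/(2\Gamma)\) is non-increasing:

```python
import math

# Assumed scalar error dynamics:  e_dot = -k*e + Omega(t) * (theta - theta_hat)
# Gradient-type adaptive update:  theta_hat_dot = Gamma * Omega(t) * e
# (illustrative values throughout; theta is the unknown true parameter)
k, Gamma, theta = 2.0, 5.0, 1.5
dt, steps = 1e-3, 20000              # 20 s of explicit Euler integration
e, theta_hat = 1.0, 0.0
for n in range(steps):
    Omega = math.sin(2.0 * n * dt)   # persistently exciting regressor
    de = -k * e + Omega * (theta - theta_hat)
    dth = Gamma * Omega * e
    e += dt * de
    theta_hat += dt * dth
print(abs(e), theta - theta_hat)     # both errors shrink; estimate is bounded
```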
The authors prove three main properties:
- Prescribed-time convergence – All agents' outputs reach the unique optimal consensus point \(y^*\) at \(t = T + t_0\) and remain there for all later times.
- Uniform boundedness – The states \(y_i\), velocities \(\dot q_i\), and control inputs \(\tau_i\) are shown to belong to \(L_\infty\) over the entire time horizon, despite the presence of the diverging gain \(\mu(t)\).
- Robustness to uncertainties – Adaptive estimation compensates for unknown constant parameters, and the Lyapunov-based proof guarantees stability even when the regression matrix \(\Omega_i\) is only partially known.
A numerical example with five Euler-Lagrange agents arranged on a ring graph validates the theory. Each agent possesses a distinct quadratic cost \(f_i(y_i) = y_i^\top P_i y_i\) with a random positive-definite matrix \(P_i\). The prescribed time is set to \(T = 5\) seconds. Simulation results show that all agents' positions converge to the same optimal point within exactly five seconds, the control inputs stay within reasonable limits, and the adaptive estimates settle to constant values.
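The graph and cost setup of such an example can be sketched as follows. The offsets `c_i` are a hypothetical addition so that the team optimum is nontrivial (costs of the form \(f_i(y_i) = y_i^\top P_i y_i\) are all minimized at the origin); everything else follows the described five-agent ring configuration:

```python
import numpy as np

N = 5
rng = np.random.default_rng(0)
A = rng.standard_normal((N, 2, 2))
P = np.array([a @ a.T + np.eye(2) for a in A])  # random positive-definite P_i
c = rng.standard_normal((N, 2))                 # hypothetical offsets c_i

# Ring-graph Laplacian: each agent communicates with its two neighbours.
L = 2 * np.eye(N)
for i in range(N):
    L[i, (i - 1) % N] = L[i, (i + 1) % N] = -1

# Team optimum of sum_i (y - c_i)^T P_i (y - c_i):
# setting the gradient 2 * sum_i P_i (y - c_i) to zero gives
# y* = (sum_i P_i)^{-1} * sum_i P_i c_i.
y_star = np.linalg.solve(P.sum(axis=0), np.einsum('ijk,ik->j', P, c))
print(y_star)  # the consensus point all agents should reach at t = T + t0
```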
In conclusion, the paper makes three notable contributions:
- It introduces a measurement‑only gradient framework that minimizes communication and is applicable to realistic sensing scenarios.
- It develops a novel prescribed‑time small‑gain theorem that bridges the gap between finite‑time/fixed‑time control and asymptotic stability for interconnected nonlinear systems.
- It designs adaptive prescribed‑time tracking controllers for Euler‑Lagrange agents, guaranteeing both fast convergence and bounded internal signals.
The results open avenues for fast, guaranteed-time cooperative optimization in multi-robot systems, autonomous vehicle fleets, and distributed energy resources, where rapid consensus on optimal decisions is critical. Suggested future work includes extending the approach to inequality constraints, time delays, packet losses, and non-convex cost functions.