Robust Adaptive Learning Control for a Class of Non-affine Nonlinear Systems
We address the tracking problem for a class of uncertain non-affine nonlinear systems with high relative degree that perform non-repetitive tasks. We propose a robust adaptive learning control scheme, with rigorous proofs of its properties, that relies on a gradient-descent parameter adaptation law to handle the system's unknown time-varying parameters, together with a state estimator for the unmeasurable state variables. Furthermore, despite the implicit nature of the control law induced by the non-affine dynamics, we provide an explicit iterative computation method to facilitate implementation of the proposed scheme. The paper includes a thorough analysis of the performance of the proposed control strategy, and simulation results demonstrate the effectiveness of the approach.
💡 Research Summary
The paper tackles the tracking problem for uncertain discrete‑time nonlinear systems whose input appears non‑affinely in the dynamics and that possess a high relative degree ρ. Unlike most iterative learning control (ILC) works that assume affine dynamics, repetitive tasks, or rely on dynamic linearization or neural‑network approximations, the authors develop a direct adaptive ILC (AILC) framework that can handle non‑repetitive reference trajectories, time‑varying unknown parameters, bounded disturbances, and high‑order dynamics.
The system is modeled as
xₖ(t+ρ)=θ(t)ᵀ f(Xₖ(t), uₖ(t)) + wₖ(t),
where θ(t)∈ℝᵖ is an unknown, possibly time‑varying parameter vector, f(·) is a known smooth mapping, Xₖ(t) collects the last ρ states, and wₖ(t) is a bounded disturbance. Four standard assumptions are imposed: (1) a non‑vanishing input gain (|θᵀ∂f/∂u|>d₀>0), (2) bounded disturbance, (3) global Lipschitz continuity of f, and (4) known bounded initial ρ states.
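To make the system class concrete, here is a minimal sketch of one step of the model xₖ(t+ρ)=θ(t)ᵀf(Xₖ(t), uₖ(t))+wₖ(t). The regressor `f` and all numerical values are illustrative assumptions, not taken from the paper; note that `f` is non-affine in `u` (the input enters through `sin` and `cos`), which is what distinguishes this class from standard affine ILC settings.

```python
import math

def f(X, u):
    """Hypothetical known regressor f(X, u), non-affine in u."""
    return [math.sin(X[0]) + u, X[1] * math.cos(u)]

def step(theta, X, u, w=0.0):
    """One-step update x(t + rho) = theta^T f(X, u) + w."""
    return sum(th * fi for th, fi in zip(theta, f(X, u))) + w

# Example evaluation with illustrative parameter and state values.
x_next = step(theta=[1.0, 0.5], X=[0.2, -0.1], u=0.3)
```

For this choice, θᵀ∂f/∂u = 1 − 0.5·X[1]·sin(u) stays above a positive constant near the chosen operating point, consistent with the non-vanishing input-gain assumption (1).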
Feasibility analysis – Under zero disturbance, the authors prove that for any iteration k and time t there exists a unique ideal control input u*ₖ(t) satisfying the exact tracking condition θ(t)ᵀf(Xₖ(t), u*ₖ(t))=rₖ(t+ρ). This result follows from the global implicit function theorem and a constructive contraction‑mapping argument. Because u*ₖ(t) is defined only implicitly, it cannot be expressed in closed form.
Adaptive law – To deal with the unknown θ(t) and the unmeasured components of Xₖ(t), a state estimator (not detailed in the excerpt) provides estimates of the hidden states, while a gradient‑descent‑based parameter update minimizes the cost
J(θ̂)=‖xₖ(t+ρ)−θ̂ᵀf(Xₖ(t), uₖ(t))‖²/(mₖ(t)²),
with mₖ(t)=√(1+‖f‖²). The basic gradient step is augmented with a dead‑zone function aₖ(t) and a projection operator onto the known compact set B(θ,R) to guarantee robustness against disturbances and to keep the estimate bounded. The resulting update equations (4)–(8) constitute the GDP‑A (gradient descent parameter adaptation) law.
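One GDP-A-style step can be sketched as follows: a gradient step on J(θ̂) normalized by mₖ(t)² = 1+‖f‖², gated by a dead zone, then projected onto a ball. The gain `gamma`, dead-zone width `dz`, and ball parameters are illustrative assumptions; the paper's exact equations (4)–(8) should be consulted for the precise form.

```python
import math

def gdpa_update(theta_hat, phi, x_meas, *, gamma=0.5,
                dz=0.05, center=None, radius=5.0):
    """One normalized-gradient step with dead zone and projection.

    theta_hat : current parameter estimate
    phi       : regressor f(X_k(t), u_k(t)) evaluated this step
    x_meas    : measured output x_k(t + rho)
    """
    center = center if center is not None else [0.0] * len(theta_hat)
    m2 = 1.0 + sum(p * p for p in phi)                # m_k(t)^2 = 1 + ||f||^2
    e = x_meas - sum(th * p for th, p in zip(theta_hat, phi))
    a = 0.0 if abs(e) <= dz else e                    # dead-zone function a_k(t)
    new = [th + gamma * a * p / m2 for th, p in zip(theta_hat, phi)]
    # Projection onto the ball B(center, radius) keeps the estimate bounded.
    d = math.sqrt(sum((n - c) ** 2 for n, c in zip(new, center)))
    if d > radius:
        new = [c + (n - c) * radius / d for n, c in zip(new, center)]
    return new
```

The dead zone freezes adaptation when the prediction error is small enough to be disturbance-dominated, and the projection enforces θ̂ ∈ B(θ̄, R), the two robustness mechanisms described above.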
Numerical computation of the control input – The implicit control law is solved iteratively using a contraction mapping
T(u)=u−λ Z(u), Z(u)=θ̂ₖ(t)ᵀf(X̂ₖ(t), u)−rₖ(t+ρ),
with a suitably chosen λ>0. Starting from an arbitrary initial guess, repeated application of T produces a sequence converging to the unique fixed point ûₖ(t). The authors derive explicit bounds on the approximation error δₖ(t)=ûₖ(t)−u*ₖ(t) as a function of λ, the contraction constant, and the number of iterations, and they analyze how this error propagates into the tracking error.
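The fixed-point scheme above can be sketched in a few lines. The regressor, reference value, and gain λ below are illustrative assumptions; for the chosen `f`, any λ with |1 − λ·θ̂ᵀ∂f/∂u| < 1 near the solution makes T a contraction.

```python
import math

def solve_control(theta_hat, X_hat, r, f, *, lam=0.8, tol=1e-10, max_iter=200):
    """Iterate T(u) = u - lam * Z(u), Z(u) = theta_hat^T f(X_hat, u) - r,
    until successive iterates agree to within tol."""
    u = 0.0                                            # arbitrary initial guess
    for _ in range(max_iter):
        Z = sum(th * fi for th, fi in zip(theta_hat, f(X_hat, u))) - r
        u_next = u - lam * Z
        if abs(u_next - u) < tol:
            return u_next
        u = u_next
    return u

# Hypothetical regressor, non-affine in u.
f = lambda X, u: [u + 0.1 * math.sin(u), X[0]]
u_hat = solve_control([1.0, 0.5], [0.4], r=1.0, f=f)
```

Because the contraction constant bounds the per-iteration error reduction, the number of iterations directly trades computation for the approximation error δₖ(t) analyzed in the paper.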
Stability and convergence – By combining the parameter adaptation law, the state estimator, and the numerically approximated control input, the authors prove that the tracking error eₖ(t)=xₖ(t)−rₖ(t) converges to zero when wₖ(t)=0, and otherwise remains bounded within a ball whose radius depends linearly on the disturbance supremum w, the relative degree ρ, and the control‑input approximation error. The analysis explicitly quantifies the trade‑off between computational effort (number of contraction‑mapping iterations) and tracking accuracy.
Simulation results – Two benchmark systems are simulated: (i) a second‑order strict‑feedback system (ρ=2) with sinusoidal, iteration‑varying references and periodically varying parameters; (ii) a third‑order system (ρ=3) with more complex nonlinearities and random bounded disturbances. In both cases the proposed AILC outperforms previously reported dynamic‑linearization‑based ILC and neural‑network‑based ILC, achieving significantly lower steady‑state errors and demonstrating robustness to parameter variations and disturbances.
Contributions – The paper makes three major contributions: (1) a direct AILC scheme for a class of non‑affine nonlinear systems with high relative degree, avoiding the need for dynamic linearization or neural‑network approximations; (2) a rigorous feasibility proof based on the implicit function theorem and a practical contraction‑mapping algorithm for computing the implicit control law; (3) a comprehensive robustness analysis that links tracking performance to disturbance magnitude, system relative degree, and numerical approximation error.
Overall, the work provides a solid theoretical foundation and a practically implementable algorithm for adaptive learning control of complex non‑affine systems, expanding the applicability of ILC techniques to a broader set of real‑world engineering problems.