Self-triggered stabilizing controllers for linear continuous-time systems
Authors: Fairouz Zobiri, Nacim Meslem, Brigitte Bidegaray-Fesquet
Abstract

Self-triggered control is an improvement on event-triggered control methods. Unlike the latter, self-triggered control does not require monitoring the behavior of the system constantly. Instead, self-triggered algorithms predict the events at which the control law has to be updated before they happen, relying on the system model and past information. In this work, we present a self-triggered version of an event-triggered control method in which events are generated when a pseudo-Lyapunov function (PLF) associated with the system increases up to a certain limit. This approach has been shown to considerably decrease the communications between the controller and the plant, while maintaining system stability. To predict the intersections between the PLF and the upper limit, we use a simple and fast root-finding algorithm. The algorithm mixes the global convergence properties of the bisection method and the fast convergence properties of the Newton-Raphson method. Moreover, to ensure the convergence of the method, the initial iterate of the algorithm is found through a minimization algorithm.

Fairouz Zobiri (1,2), Nacim Meslem (1), and Brigitte Bidegaray-Fesquet (2)
(1) Univ. Grenoble Alpes, CNRS, Grenoble INP, GIPSA-lab, 38000, Grenoble, France
(2) Univ. Grenoble Alpes, CNRS, Grenoble INP, LJK, 38000, Grenoble, France
March 6, 2022

1 Introduction

For a long time, the implementation of continuous-time control tasks on digital hardware has been tied to the so-called Shannon-Nyquist theorem. This condition requires the sampling frequency of the continuous control signal to be relatively high in order to avoid aliasing phenomena. This in turn requires the sensors, controller and actuators to communicate at high speed, tasks that can strain communication channels, energy sources and processing units.
With the establishment of event-triggered control, researchers and engineers alike realized the possibility of taking samples at a lower pace, provided the samples are non-uniformly distributed over time. Fewer samples mean fewer interactions between the different blocks of the system, and less demand on the communication channels and computation resources. Event-triggered control, however, only half-solves the problem. Event-triggered control works by updating the control law only when the controlled system violates predefined conditions on its states or output. This implies monitoring the state of the system continuously, thus inducing the high-frequency exchanges that we were trying to avoid. Monitoring the event-triggering conditions might also require extra circuitry that is often difficult, if not impossible, to build into existing plants. One way to cancel the need for constant monitoring of the state is to predict in advance the time instants at which the conditions on system behavior are infringed. For this, we use the system's model to predict the evolution of its states. Control strategies in which the times of the control update are known beforehand are the topic of self-triggered control, a variant of event-triggered control. Self-triggered control is most often encountered in the framework of discrete-time systems [1], [2], [3]. In [4], the event-triggering conditions are developed in continuous time, whereas the next execution time is found by setting a time horizon that is divided into sub-intervals. An event is then determined by checking the event-triggering conditions in each sub-interval. Continuous-time systems have also been studied in [5], where the problem is treated as an optimal control problem, with the next sampling instant as a decision variable. The result is a non-convex quadratic programming problem which is then approximated by a convex problem.
In [6] and [7] the authors suggest a self-triggered control method that preserves the L2 stability of the system in the presence of disturbances. Furthermore, self-triggered control schemes have often been coupled with model predictive control, as both use the model to project the behavior of the system up to some future time [8], [9]. In this work, we design a self-triggered control algorithm for continuous-time linear time-invariant (LTI) systems. The algorithm predicts the times at which the system's behavior will infringe some predefined performance measures. We consider that the system is functioning properly when a pseudo-Lyapunov function (PLF) of its states is below a predefined upper bound. The control law is updated when the PLF reaches this upper bound. Predicting the events analytically is a difficult task, and thus the self-triggered control algorithm computes an approximation of the event times via a minimization algorithm followed by a root-finding algorithm. The root-finding algorithm detects the intersections between the PLF and the upper limit, but needs to be properly initialized to converge to the right value. To do this, we take advantage of the shape of the PLF between two events; after the control is updated, the PLF decreases for some time, reaches a minimum and then increases again. This local minimum is easily computed via a minimization algorithm, and provides a good initial iterate for the root-finding algorithm. This paper is divided as follows. In Section 2, we present the problem that we are solving and establish the mathematical formalism necessary to expose our method. Section 3 is divided into two parts. In the first part, we present the minimization algorithm and explain the motivation behind why we need this stage. In the second part, we give the details of the root-finding algorithm.
Finally, in Section 4, we validate the method through a numerical example.

2 Problem Formulation

In this section, we first summarize the event-triggered control algorithm introduced in [10]. Then we introduce a self-triggered algorithm that predicts the events generated by this event-triggered algorithm. Consider the following LTI system

$$\dot{x}(t) = Ax(t) + Bu(t), \qquad x(t_0) = x_0. \tag{1}$$

We want to stabilize System (1) with the following control sequence

$$u(t_k) = -Kx(t_k), \qquad u(t) = u(t_k), \quad \forall t \in [t_k, t_{k+1}), \tag{2}$$

where K is the feedback gain, selected such that the matrix A − BK is Hurwitz. The time instants t_k represent the instants at which the control law has to be updated to satisfy predefined stability or performance criteria. The objective of a self-triggered control implementation is to predict the time sequence t_k, k = 0, 1, 2, ..., at which the value of the control is updated. The closed-loop form of System (1) can be written in an augmented form on [t_k, t_{k+1}), with augmented state ξ_k(t) = [x(t), e_k(t)]^T ∈ R^{2n} and e_k(t) = x(t) − x(t_k),

$$\dot{\xi}_k(t) = \begin{pmatrix} A - BK & BK \\ A - BK & BK \end{pmatrix} \xi_k(t) =: \Psi \xi_k(t), \tag{3}$$

where 0_n below denotes the vector of zeros in R^n. The system of equations (3) admits a unique solution on the interval [t_k, t_{k+1}),

$$\xi_k(t) = e^{\Psi(t - t_k)} \xi_k(t_k), \tag{4}$$

where ξ_k(t_k) = [x(t_k)^T 0_n^T]^T. We define I_k(t) as the indicator function

$$I_k(t) = \begin{cases} 1, & t \in [t_k, t_{k+1}), \\ 0, & \text{otherwise}. \end{cases} \tag{5}$$

Then, for all t, the state of the augmented system is given by

$$\xi(t) = \sum_k \xi_k(t) I_k(t), \tag{6}$$

with initial state

$$\xi(t_0) = [x_0^T \; 0_n^T]^T =: \xi_0. \tag{7}$$

In what follows, we designate ξ_k(t) simply as ξ(t) when the two can be distinguished from the context.

Remark 1.
When t ∈ [t_k, t_{k+1}), System (1) is written in closed-loop form as ẋ(t) = Ax(t) − BKx(t_k), with solution x(t) = (e^{A(t−t_k)} − A^{−1}(e^{A(t−t_k)} − I)BK) x(t_k), which requires A to be non-singular. For this reason, we have chosen to work with the augmented system (3), which admits a solution for all A and does not exclude any class of systems. The proposed approach is then applicable to all stabilizable systems.

To determine the control sequence, we first need to define the performance criteria that we impose on the system. For this, we associate to the system a positive definite, energy-like function of the state, that we refer to as a pseudo-Lyapunov function or PLF, and which takes the following form

$$V(\xi(t)) = \xi(t)^T \begin{pmatrix} P & 0_{n \times n} \\ 0_{n \times n} & 0_{n \times n} \end{pmatrix} \xi(t) \equiv \xi(t)^T P \xi(t), \tag{8}$$

where 0_{n×n} is the n × n matrix of zeros, and P is a positive definite matrix that satisfies the following inequality

$$(A - BK)^T P + P(A - BK) \le -\lambda P, \tag{9}$$

where λ > 0. For the control sequence given by Equation (2) to stabilize the system, the PLF associated with the system has to decrease along the trajectories of the system. In this work, however, we relax this condition and only require the PLF to remain upper bounded by a user-defined strictly decreasing threshold. Let the function W(t) be such an upper bound; then the PLF has to satisfy

$$V(\xi(t)) \le W(t). \tag{10}$$

The upper bound W(t) has to satisfy a few conditions. It has to be positive, strictly decreasing in time, and to ultimately tend toward zero. One suitable candidate is an exponentially decaying function of the form

$$W(t) = W_0 e^{-\alpha(t - t_0)}, \tag{11}$$

where W_0 ≥ V(ξ(t_0)) and α > 0. The behaviors of V(ξ(t)) and W(t) are depicted in Figure 1.

Figure 1: The pseudo-Lyapunov function and the upper limit.
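As a concrete illustration of Equations (3), (4) and (8), the sketch below propagates the augmented state with a matrix exponential and evaluates the PLF. The system, gain K and Lyapunov right-hand side are our own hypothetical choices (a double integrator), not taken from the paper; note that A is singular here, which is precisely the case Remark 1 says the augmented form handles.

```python
import numpy as np
from scipy.linalg import expm, solve_continuous_lyapunov

# Hypothetical 2-state example (double integrator); A is singular,
# so the non-augmented closed-form solution of Remark 1 would not apply.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
K = np.array([[1.0, 2.0]])          # places both closed-loop poles at -1
Acl = A - B @ K

# A valid choice of P for (9): solve Acl^T P + P Acl = -I
P = solve_continuous_lyapunov(Acl.T, -np.eye(2))

# Augmented dynamics (3): xi = [x; e_k], Psi = [[A-BK, BK], [A-BK, BK]]
n = A.shape[0]
Psi = np.block([[Acl, B @ K], [Acl, B @ K]])

x0 = np.array([1.0, -0.5])
xi0 = np.concatenate([x0, np.zeros(n)])    # xi_k(t_k) = [x(t_k); 0_n]

def V(xi):
    """PLF (8): only the x-block of xi enters, V = x^T P x."""
    x = xi[:n]
    return x @ P @ x

tau = 0.7
xi_tau = expm(Psi * tau) @ xi0             # solution (4) with t_k = 0

# Cross-check: the held-control loop xdot = A x - B K x(0) is itself linear
# in the pair [x; x(0)], so it can also be propagated exactly by exponentiating
# the block matrix [[A, -BK], [0, 0]]
M_ref = np.block([[A, -B @ K], [np.zeros((n, n)), np.zeros((n, n))]])
x_ref = (expm(M_ref * tau) @ np.concatenate([x0, x0]))[:n]
```

The first n components of ξ(τ) obtained from Ψ coincide with the directly propagated held-control trajectory, confirming that the augmented form reproduces the closed loop without inverting A.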
Furthermore, since we want to drive the system trajectory to equilibrium as fast as possible, and since the evolution of V(ξ(t)) is determined by the evolution of W(t), we want W(t) to decay to zero as fast as possible as well. The fastest possible rate of change of W(t) is the largest scalar λ that can be achieved from Inequality (9), as shown in [10]. The largest possible value of λ is the solution of the following generalized eigenvalue problem

$$\begin{aligned} \text{maximize } \; & \lambda \\ \text{subject to } \; & (A - BK)^T P + P(A - BK) \le -\lambda P, \\ & P > 0. \end{aligned} \tag{12}$$

Let λ_max denote the solution of Problem (12). The rate of decay of W(t) can be chosen as 0 < α < λ_max. Then, we can define the time instants t_k as

$$t_{k+1} = \inf \{ t > t_k \mid V(\xi(t)) = W(t) \}, \tag{13}$$

with t_0 = 0. In the next section, we detail the procedure used to predict the lower bounds of the entries of the time sequence t_1, t_2, ..., knowing t_0.

3 Self-triggered Algorithm

Let Z(t) denote the difference W(t) − V(ξ(t)). From Equation (13), to determine t_{k+1}, it suffices to determine the successive time instants at which the following equation is verified

$$Z(t) = 0. \tag{14}$$

Equation (14) depends on time and implicitly on the state ξ(t), which depends on time through a transition matrix, as seen from Equation (4). This configuration renders Equation (14) extremely difficult, if not impossible, to solve analytically. For this reason, we propose a numerical solution to Equation (14), where the instant t_{k+1} is computed through a root-finding algorithm. A numerical scheme needs an initial value, and our first guess would be to initialize the root-finding algorithm at instant t_k in order to predict the instant t_{k+1}. However, the instant t_k is itself a root, and as a result, the algorithm fails to converge to t_{k+1} and finds t_k as a solution again. Therefore, we have to initialize our algorithm at a later time instant.
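For a fixed stabilizing gain, the optimum of the GEVP (12) can be read off the closed-loop spectrum: the inequality in (12) admits a solution P > 0 whenever A − BK + (λ/2)I is Hurwitz, so the supremum of λ is −2 times the largest real part of the eigenvalues of A − BK. The sketch below applies this shortcut to the example of Section 4 (the shortcut itself is our observation, not a procedure from the paper, which solves (12) as a GEVP).

```python
import numpy as np

# Closed-loop matrix of the Section 4 example
A = np.array([[1.0, 1.0, 0.0],
              [-2.0, 0.0, 4.0],
              [5.0, 4.0, -7.0]])
B = np.array([[-1.0], [0.0], [1.0]])
K = np.array([[8.38, 26.36, 10.38]])
Acl = A - B @ K

# (A-BK)^T P + P (A-BK) <= -lam P with P > 0 is feasible whenever
# A - BK + (lam/2) I is Hurwitz, so the supremum of (12) is
# lam_max = -2 * max Re(eig(A - BK)).
lam_max = -2.0 * np.max(np.linalg.eigvals(Acl).real)
```

With the paper's gain K, the dominant closed-loop pole is near −1.14, giving λ_max ≈ 2.28, consistent with the value reported in Section 4.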
Let ρ_k denote the first time instant at which the PLF reaches a local minimum after the time t_k. The instant ρ_k is a good candidate for an initial value, and in what follows, we further justify its use in the root-finding algorithm. To do this, we classify the evolution of the PLF between two triggering instants, t_k and t_{k+1}, into two categories. In the first case, shown in Figure 2a, the minimum of the PLF occurs between two consecutive triggering instants, so that t_k < ρ_k < t_{k+1}. In this case, it is better to initialize our algorithm at time ρ_k, which, combined with the global convergence properties of the bisection method, avoids a convergence toward the time t_k. In the second case, the PLF intersects the threshold before reaching a local minimum (see Figure 2b). In this case the instant ρ_k offers an upper bound on t_{k+1} from which we can work our way backwards to recover the instant t_{k+1}. Therefore, we need to precede the root-finding algorithm by a minimization stage, aimed at identifying the time instants at which V(ξ(t)) reaches a local minimum.

Figure 2: Shape of the PLF for different choices of α. (a) Case ρ_k ≤ t_{k+1}. (b) Case ρ_k > t_{k+1}.

3.1 Minimization Stage

Once again, the complexity of the problem makes it impossible to synthesize a closed-form analytical solution, and we suggest a numerical solution instead. The minimization algorithm is a modified Newton algorithm that uses t_k as an initial guess to locate the minimum of V(ξ(t)) for t > t_k. At each iteration, we compute the Newton step, denoted by Δρ. Let ∇_t V and ∇²_t V denote the first and second time derivatives of V(ξ(t)). Then, the Newton step is computed as

$$\Delta\rho = -\frac{\nabla_t V}{|\nabla_t^2 V|}. \tag{15}$$
The expressions of ∇_t V and ∇²_t V are given by

$$\nabla_t V = \xi(t)^T \begin{pmatrix} M & L \\ L^T & 0_{n \times n} \end{pmatrix} \xi(t), \tag{16}$$

$$\nabla_t^2 V = \xi(t)^T \begin{pmatrix} \Lambda & \Gamma \\ \Gamma^T & \gamma \end{pmatrix} \xi(t), \tag{17}$$

where ξ(t) is given by Equation (4) and

$$\begin{aligned} M &= (A - BK)^T P + P(A - BK), \\ L &= PBK, \\ \Lambda &= (A - BK)^T M + M(A - BK) + (A - BK)^T L^T + L(A - BK), \\ \Gamma &= (A - BK)^T L + MBK + LBK, \\ \gamma &= L^T BK + K^T B^T L. \end{aligned}$$

The minimization procedure is given in Algorithm 1. The current iterate is denoted as ρ while the Newton step is represented by Δρ. The number of iterations is bounded by the parameter MaxIter for safety, in case the algorithm fails to converge. The procedure starts by computing a Newton step as given by Equation (15). Then, a line search is performed to scale the Newton step. The Newton step is scaled such that the function V decreases enough in the search direction. This step is needed because Newton's method for minimization is an algorithm that computes the roots of the first derivative of the function to be minimized. In our case, we have observed that the first derivative may contain an extremum near the root. Taking the tangent of ∇_t V at these points yields unreasonable Newton steps [11] that need to be damped. For this reason, this method is sometimes referred to as the damped Newton's method [12]. Once the scaling factor is found, the damped Newton step is taken and a new iterate is found.

Algorithm 1 Minimization Algorithm
1: function Minimization(t_k)
2:   ρ ← t_k
3:   while iter ≤ MaxIter do
4:     Δρ ← −∇_t V / |∇²_t V|
5:     s ← 1
6:     while V(ξ(ρ + sΔρ)) − V(ξ(ρ)) ≥ κ₁ ∇_t V s Δρ do
7:       s ← βs,  β ∈ (0, 1), κ₁ ∈ (0, 0.5)
8:     tmp ← ρ
9:     ρ ← ρ + sΔρ
10:    if |tmp − ρ| < tol then
11:      return ρ_k = ρ
12:    iter++
13:  return ρ_k

Lines 5 through 8 of Algorithm 1 correspond to a backtracking line search.
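The minimization stage can be sketched as follows, using the plant, gain and (rounded) matrix P of the Section 4 example; the tolerance, the s-underflow guard and the small regularizer in the division are our own choices, so this is a sketch of Algorithm 1 rather than the authors' exact implementation.

```python
import numpy as np
from scipy.linalg import expm

# Section 4 example: plant, gain, and the (rounded) matrix P from (21)
A = np.array([[1.0, 1.0, 0.0], [-2.0, 0.0, 4.0], [5.0, 4.0, -7.0]])
B = np.array([[-1.0], [0.0], [1.0]])
K = np.array([[8.38, 26.36, 10.38]])
P = np.array([[275.7, 1025.5, 577.9],
              [1025.5, 3840.1, 2173.5],
              [577.9, 2173.5, 1234.1]])
Acl, BK, n = A - B @ K, B @ K, 3

# Offline matrices entering (16)-(17)
M = Acl.T @ P + P @ Acl
L = P @ BK
Lam = Acl.T @ M + M @ Acl + Acl.T @ L.T + L @ Acl
Gam = Acl.T @ L + M @ BK + L @ BK
gam = L.T @ BK + K.T @ B.T @ L

Psi = np.block([[Acl, BK], [Acl, BK]])                      # dynamics (3)
Pbar = np.block([[P, np.zeros((n, n))], [np.zeros((n, n)), np.zeros((n, n))]])
G1 = np.block([[M, L], [L.T, np.zeros((n, n))]])            # gradient form (16)
G2 = np.block([[Lam, Gam], [Gam.T, gam]])                   # curvature form (17)

xi0 = np.concatenate([np.array([-2.0, 3.0, 5.0]), np.zeros(n)])

def xi(t):   return expm(Psi * t) @ xi0                     # solution (4), t_k = 0
def V(t):    return xi(t) @ Pbar @ xi(t)                    # PLF (8)
def dV(t):   return xi(t) @ G1 @ xi(t)
def d2V(t):  return xi(t) @ G2 @ xi(t)

def minimization(tk, beta=0.35, kappa1=0.01, tol=1e-10, max_iter=50):
    """Damped Newton search for the first local minimum of V after tk (Algorithm 1)."""
    rho = tk
    for _ in range(max_iter):
        step = -dV(rho) / (abs(d2V(rho)) + 1e-30)           # Newton step (15)
        s = 1.0
        # backtracking line search, lines 5-8 of Algorithm 1
        while s > 1e-12 and V(rho + s * step) - V(rho) >= kappa1 * dV(rho) * s * step:
            s *= beta
        prev, rho = rho, rho + s * step
        if abs(rho - prev) < tol:
            break
    return rho

rho0 = minimization(0.0)
```

Starting from t_0 = 0, the returned ρ_0 satisfies ∇_t V(ρ_0) ≈ 0 with V(ρ_0) well below V(0), matching the decrease-then-increase shape of the PLF between events.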
The line search works as follows: a Taylor series approximation of V(ξ(t)) is computed, then the line search variable is decreased until a suitable reduction in V(ξ(t)) is achieved. The parameter κ₁ indicates the percentage by which V(ξ(t)) has to decrease along the search direction. The final value of s is the quantity by which the Newton step is scaled, and β is the fraction by which s is decreased in each line search iteration. Algorithm 1 terminates when the change in ρ from one iteration to the next becomes negligible. The algorithm's convergence can be very fast, first because many time-consuming operations can be carried out offline. This is the case for the matrices M, L, Γ, Λ and γ. Even the introduction of a backtracking line search, which is usually a time-consuming procedure, does not slow down the algorithm. This is because the line search is only performed when we are far from the minimizer, but becomes unnecessary as we approach the minimal value. Therefore, we noticed through our experiments that the algorithm's execution time is negligible compared to the length of the interval t_{k+1} − t_k.

Remark 2. In the case of one-dimensional systems, the times at which the local minima of V(ξ) occur can be found analytically. The analytical expression for finding ρ_k and its derivation are given in the Appendix. When tested on numerical examples, the analytical expression and the numerical approach return the same time sequence.

3.2 Root-finding Algorithm

Since we want our root-finding algorithm to be both fast and precise, we select an algorithm that combines Newton's method and the bisection method. The bisection method is a globally convergent method that acts as a safeguard against failures of the algorithm when we are far from the root. Newton's algorithm, on the other hand, has a quadratic convergence rate near the root and is used to speed up the algorithm.
To be able to use the bisection, we need to locate the root within an interval, which we denote [t_min, t_max]. This is a simple enough task once we know the time instant ρ_k. As explained earlier, t_{k+1} can occur either before or after the time instant ρ_k. Either case is identified by computing Z(ρ_k): if Z(ρ_k) > 0, then t_{k+1} > ρ_k, whereas if Z(ρ_k) < 0, then t_{k+1} < ρ_k. We then define two time instants t₁ and t₂, we set t₁ = ρ_k, and we follow the appropriate procedure.

• Case t_{k+1} > ρ_k: We pick a parameter θ > 0. We suggest to scale the value of θ on the time lapse ρ_k − t_k. The scaling factor κ₂ is chosen between 0 and 0.5, depending on how crude we want the search to be, resulting in θ = κ₂(ρ_k − t_k). Then, starting from t₂ = t₁ + θ, we keep increasing t₂ by a value θ until Z(t₂) < 0. This procedure is depicted in Figure 3a. Finally, we find t_min = t₁ and t_max = t₂.

• Case t_{k+1} < ρ_k: In this case, we pick θ = −κ₂(ρ_k − t_k). Starting from t₂ = t₁ + θ, and while Z(t₂) < 0, θ is decreased by a factor of 2 and t₂ is decreased by a value θ. This procedure is depicted in Figure 3b. We keep dividing θ by 2 to avoid the situation t₂ < t_k when the search is too crude. Then, we set t_min = t₂ and t_max = t₁.

The pre-processing stage is synthesized in Algorithm 2.

Figure 3: Locating the root inside an interval. (a) Case ρ_k < t_{k+1}. (b) Case ρ_k > t_{k+1}.

Algorithm 2 Interval Finding
1: function Pre-processing(t_k)
2:   t₁ ← ρ_k
3:   if Z(t₁) < 0 then
4:     θ ← −κ₂(ρ_k − t_k),  0 < κ₂ ≤ 0.5
5:   else
6:     θ ← κ₂(ρ_k − t_k)
7:   t₂ ← ρ_k + θ
8:   while Z(t₁)Z(t₂) ≥ 0 do
9:     t₂ ← t₂ + θ
10:    if t₂ ≤ t_k then
11:      t₂ ← t₂ − θ,  θ ← θ/2
12:      t₂ ← t₂ + θ
13:  t_max ← max(t₁, t₂)
14:  t_min ← t_max − |θ|
15:  return t_min, t_max

The root-finding algorithm can only find approximate event times t_k, and so at t = t_k we only have W(t_k) ≈ V(ξ(t_k)). For this reason, to ensure the convergence of the algorithm, at t = t_k we make the correction W(t_k) = V(ξ(t_k)). If we let W(t_k) = W_k, the expression of W(t) on the interval [t_k, t_{k+1}) becomes

$$W(t) = W_k e^{-\alpha(t - t_k)}. \tag{18}$$

The function Z(t), on [t_k, t_{k+1}), is then given by

$$Z(t) = W_k e^{-\alpha(t - t_k)} - \xi(t)^T P \xi(t), \tag{19}$$

where ξ(t) is given by Equation (4). The first derivative with respect to time, along the trajectories of ξ(t), is

$$\frac{dZ(t)}{dt} = -W_k \alpha e^{-\alpha(t - t_k)} - \nabla_t V. \tag{20}$$

To decide whether to take a Newton step or a bisection step, we first compute an iterate with Newton's method. If the new iterate is located within the previously identified interval [t_min, t_max], it is accepted. Otherwise, the Newton iterate is rejected and instead a bisection iterate (the midpoint of the search interval) is computed. The interval [t_min, t_max] is then updated. Algorithm 3 describes the root-finding procedure in detail. It is a slightly modified version of the hybrid Newton-bisection algorithm found in [13]. To make the notation shorter, from now on we refer to dZ(t)/dt as ∇_t Z(t).
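The bracketing stage can be sketched as a small generic routine. This is a simplified reading of Algorithm 2: it returns the last θ-step in which the sign change was detected (a slightly tighter bracket than the one written on lines 13-14 of the listing), and the stand-in Z functions used to exercise it are ours, with known roots, rather than the W − V difference of the paper.

```python
def find_bracket(Z, t_k, rho_k, kappa2=0.25):
    """Bracket the next event time (a sketch of Algorithm 2).

    Z is assumed to vanish at t_k and to be nonzero at rho_k, the first
    local minimum of the PLF after t_k (Algorithm 4 treats Z(rho_k) == 0
    separately). Returns (t_min, t_max) enclosing the sign change.
    """
    t1 = rho_k
    theta = kappa2 * (rho_k - t_k)
    if Z(t1) < 0:            # the PLF crossed the threshold before rho_k
        theta = -theta
    t2 = t1 + theta
    while Z(t1) * Z(t2) >= 0:
        t2 += theta
        if t2 <= t_k:        # stepped past t_k: back up and refine theta
            t2 -= theta
            theta /= 2.0
            t2 += theta
    # the sign change occurred during the last theta-step
    return tuple(sorted((t2 - theta, t2)))

# Stand-in Z functions with known roots (not from the paper)
lo, hi = find_bracket(lambda t: 1.0 - t, 0.0, 0.3)    # forward search, root at 1
lo2, hi2 = find_bracket(lambda t: 0.2 - t, 0.0, 0.3)  # backward search, root at 0.2
```

In both orientations the returned interval encloses the known root, which is all the subsequent Newton-bisection stage needs.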
Algorithm 3 Root-Finding Algorithm
1: function Newton-Bisection(t_min, t_max)
2:   if Z(t_min) == 0 then
3:     return t_min
4:   if Z(t_max) == 0 then
5:     return t_max
6:   t ← (t_min + t_max)/2
7:   Δt ← t_max − t_min,  Δt_old ← Δt
8:   compute Z(t), ∇_t Z(t)
9:   while iter ≤ MaxIter do
10:    step ← Z(t)/∇_t Z(t)
11:    if t_min ≥ t − step or t_max ≤ t − step or |Δt_old|/2 < |step| then
12:      Δt_old ← Δt
13:      Δt ← (t_max − t_min)/2
14:      t ← t_min + Δt
15:    else
16:      Δt_old ← Δt
17:      Δt ← step
18:      t ← t − Δt
19:    if |Δt| < tol₂ then return t
20:    if Z(t) > 0 then t_min ← t
21:    else t_max ← t
22:  return t_{k+1}

The algorithm starts by making sure that neither t_min nor t_max is the root; the procedure is exited if that is the case. Checking whether t_min is a root should be performed before the pre-processing, but for the sake of separation, we include it in the root-finding algorithm at this stage. The iterate t is initialized as the midpoint of the interval [t_min, t_max]. The variables Δt and Δt_old store the current and the former step lengths, respectively. We compute Z(t) and ∇_t Z(t) in order to compute the Newton step. The condition on line 11 of Algorithm 3 decides whether a Newton step is taken or rejected. If by taking the Newton step we exceed t_max or regress below t_min, or if Newton's method converges too slowly, the Newton step is rejected and a bisection step is taken instead. Lines 12 to 14 represent a bisection step, whereas lines 16 to 18 represent the case where the Newton step is taken. After the new iterate is computed, we evaluate Z(t) at that point. If Z(t) is positive, the new iterate is located before the root, and it becomes t_min. Otherwise, the current iterate becomes t_max. The algorithm terminates when the change in t between two consecutive iterates is too small, i.e., when the step length becomes smaller than a tolerance tol₂.
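The hybrid scheme above can be sketched as a generic routine. The bracket orientation (Z positive at t_min, negative at t_max) matches the paper's setting, where Z = W − V is positive before an event; the test function cos(t) − t at the end is our own stand-in with a well-known root, not a quantity from the paper.

```python
import math

def newton_bisection(Z, dZ, t_min, t_max, tol=1e-12, max_iter=100):
    """Hybrid Newton/bisection root search (a sketch of Algorithm 3).

    Assumes the root is bracketed with Z(t_min) > 0 > Z(t_max), as in the
    paper where Z = W - V is positive before the event time.
    """
    if Z(t_min) == 0.0:
        return t_min
    if Z(t_max) == 0.0:
        return t_max
    t = 0.5 * (t_min + t_max)
    dt = dt_old = t_max - t_min
    for _ in range(max_iter):
        step = Z(t) / dZ(t)
        # reject the Newton step if it leaves [t_min, t_max] or converges
        # slower than halving the bracket would
        if (t - step <= t_min) or (t - step >= t_max) or abs(dt_old) / 2 < abs(step):
            dt_old, dt = dt, 0.5 * (t_max - t_min)
            t = t_min + dt                     # bisection step
        else:
            dt_old, dt = dt, step
            t = t - dt                         # Newton step
        if abs(dt) < tol:
            return t
        if Z(t) > 0:                           # still before the root
            t_min = t
        else:                                  # past the root
            t_max = t
    return t

# Sanity check on a function with a known root: cos(t) - t is positive at 0,
# negative at 1, with its root near 0.7390851
root = newton_bisection(lambda t: math.cos(t) - t,
                        lambda t: -math.sin(t) - 1.0, 0.0, 1.0)
```

The safeguard never lets an iterate escape the bracket, while the Newton branch delivers the quadratic convergence near the root that motivates the hybrid design.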
3.3 Summary of the Self-Triggered Algorithm

The three steps of the self-triggered algorithm, described separately so far, are grouped in the order in which they are called in Algorithm 4.

Algorithm 4 Self-Triggered Algorithm
1: procedure Self-triggered
2:   ρ_k = Minimization(t_k)
3:   if Z(ρ_k) == 0 then
4:     t_{k+1} = ρ_k
5:   [t_min, t_max] = Pre-processing(ρ_k)
6:   t_{k+1} = Newton-Bisection(t_min, t_max)

The main contribution of this paper to the design of self-triggered stabilizing controllers is introduced in the following proposition.

Proposition 1. Let λ_max be the solution to Problem (12). If we choose α between 0 and |λ_max|, Algorithm 4 provides update instants t_k for the control law u(t), given by Equation (2), such that System (1) is asymptotically stable.

The proof of Proposition 1 is given in detail in [10]. In what follows, a brief summary of the proof is given. Since W(t) decreases exponentially toward zero, we need to show that V(ξ(t)) < W(t) for all t (or, equivalently, that Z(t) > 0 for all t) to prove that System (1) is asymptotically stable. We know that in the interval (t_{k−1}, t_k), k ≥ 1, Z(t) > 0. And since Algorithm 4 predicts the time t_k when Z(t) approaches zero from above, at t = t_k the control law u(t) is updated so that Z(t) becomes strictly positive again. Therefore, Z(t) > 0 for all t.

4 Numerical Simulation

Consider the following third-order LTI system [14],

$$\dot{x}(t) = \begin{pmatrix} 1 & 1 & 0 \\ -2 & 0 & 4 \\ 5 & 4 & -7 \end{pmatrix} x(t) + \begin{pmatrix} -1 \\ 0 \\ 1 \end{pmatrix} u(t),$$

with initial state x₀ = [−2 3 5]^T. The system is unstable, with poles at −8.58, 0.58, 2.00. We stabilize the system with a state-feedback control law with feedback gain

$$K = \begin{pmatrix} 8.38 & 26.36 & 10.38 \end{pmatrix},$$

that places the poles at −1.14 ± 1.35i, −5.71. Solving the generalized eigenvalue problem (12) yields λ_max = 2.28 and

$$P = \begin{pmatrix} 275.7 & 1025.5 & 577.9 \\ 1025.5 & 3840.1 & 2173.5 \\ 577.9 & 2173.5 & 1234.1 \end{pmatrix}. \tag{21}$$
We select α = 2.18 s⁻¹ and W₀ = 1.3 V(x₀). We simulate the system's operation for 7 s, with a sampling period T_s = 10⁻³ s. The values of the parameters required by the minimization algorithm and the root-finding algorithm are given in Table 1.

Table 1: Values of the parameters needed in the self-triggered control algorithm

Parameter | Value
MaxIter   | 50
β         | 0.35
κ₁        | 0.01
tol₁      | 10⁻⁵
κ₂        | 0.25
tol₂      | 10⁻⁵ at t = 0

The tolerance tol₂, at which the root-finding algorithm terminates, is set dynamically. Such a choice is motivated by the exponential decrease of W(t), which tends to zero as time tends to infinity. If tol₂ is constant, at some point W(t) can decrease below this tolerance, and so does V(ξ(t)), leading to a small Z(t) that could be mistaken for the root when there is actually no intersection. Therefore, we index the value of tol₂ on W_k. As long as W_k > 1, tol₂ = 10⁻⁵ as given, but when W_k < 1, tol₂ is decreased according to the following equation

$$tol_2 = 10^{-5-\varphi}, \quad \text{with } \varphi = \lceil |\log_{10}(W_k)| \rceil.$$

At t = 0, we apply the control law u(t₀) = −Kx₀ and we compute the instant t₁ using the self-triggered algorithm. The system is then in an open-loop configuration, only maintaining a control value of u(t₀), until the clock signal displays the time t₁. At this point, the operation is repeated. Figure 4a shows the time evolution of the functions V(ξ(t)) and W(t). It shows that V(ξ(t)) remains below W(t) at all times, which proves that the algorithm correctly identifies the times at which events occur, inducing an update of the control law. Even when the two functions approach zero, the intersections are still detected, as shown in Figure 4b, which singles out an event at t = 6.476 s and W(t) = 0.0948.
The zoom on the event at t = 6.476 s shows that the update of the control law is carried out one time step before the intersection occurs. This is due to the fact that the control can only be updated at multiples of the simulation sampling period T_s. For this reason, when an intersection is predicted somewhere between the sampling instants t = 6.476 s and t = 6.477 s, we update the control law at the earlier instant, t = 6.476 s, to prevent the PLF from crossing the threshold. By contrast, in the event-triggered control algorithm on which this approach is based, the event is detected one time step after it occurs. From this point of view, the self-triggered control algorithm represents another improvement on event-triggered control. The three state variables, shown in Figure 4c, tend to equilibrium, and the norm ‖x(t)‖ stabilizes below 0.05 within 6.94 s. The stabilizing control law is shown in Figure 4d. This figure shows the uneven distribution of updates in time. Figure 4d also includes a zoom on the control in the time interval [4 s, 7 s], which emphasizes the asynchronous nature of the updates, and which is not visible in the larger figure.

Figure 4: Simulation results of self-triggered control. (a) PLF and threshold. (b) Zoom on the event at t = 6.476 s. (c) States. (d) Self-triggered control.

Figure 5: The running times of the self-triggered algorithm versus the inter-event times.

Table 2 lists the first six event times with the corresponding inter-event times t_{k+1} − t_k and the running times of the self-triggered control algorithm.
We notice that for our experimental conditions, the algorithm's running time is much smaller than the corresponding inter-event time, allowing the online use of the algorithm. Moreover, the running time decreases as we go further in time, the highest running time being that of the first call of the algorithm, but this call can be made offline. Eventually, the running time settles around 0.002 s. Additionally, the matrices M, L, Λ, Γ and γ are computed offline, and thus do not affect the running time. Figure 5 further illustrates the disparity between the running times of the algorithm and the inter-event times.

Table 2: The first 6 events

Update time | Inter-event time | Running time
0.453 | 0.453 | 0.0481
0.691 | 0.238 | 0.0081
1.228 | 0.537 | 0.0043
1.403 | 0.175 | 0.0029
1.641 | 0.238 | 0.0089
2.328 | 0.687 | 0.0030

5 Conclusion

We presented a self-triggered control algorithm for linear time-invariant systems. The approach approximately predicts the times at which a pseudo-Lyapunov function associated with the system reaches an upper limit, which are the times at which the system ceases to be stable and the control needs to be updated. These time instants are approximated using numerical methods in two stages. In the first stage, a minimization algorithm locates the time ρ_k at which the pseudo-Lyapunov function reaches a minimum value in the interval between two events. In the second stage, a root-finding algorithm initialized at ρ_k approximates the time of the next event. The strength of this root-finding method is that it combines a globally convergent method with a locally convergent method. The globally convergent method ensures convergence to the right solution, while the locally convergent method speeds up the convergence. Additionally, the minimization stage guarantees that the algorithm is initialized with a value close to the region of attraction of the actual root.
The convergence and speed properties of this method make it suitable for both offline and online implementations. To further validate this approach, the next step would be to apply the self-triggered control algorithm to a real system. This would allow us to test its efficiency against the uncertainties encountered in practical applications. Another perspective would be to extend this method to solve the problem of reference tracking, as this involves, in addition to stabilizing the system, the difficulty of detecting the changes in the reference trajectory.

6 Acknowledgment

This work has been partially supported by the LabEx PERSYVAL-Lab (ANR-11-LABX-0025-01).

A One-dimensional Systems

In the case of one-dimensional systems, the local minimum of V(ξ(t)) can be found analytically. In what follows, we give a detailed procedure to determine ρ_k analytically. We consider the first-order LTI system described by

$$\dot{x}(t) = ax(t) + bu(t), \qquad y(t) = cx(t), \tag{22}$$

where x(t), u(t) ∈ R, and a, b, c ∈ R*, for all t > 0. Let x_k denote x(t_k). The event-triggered control law is given by u(t) = −Kx_k, and System (22) in its closed-loop form is given by

$$\dot{x}(t) = ax(t) - bKx_k, \quad \forall t \in [t_k, t_{k+1}). \tag{23}$$

Since we assumed that a ≠ 0, the augmented system described by Equation (3) is not needed in the scalar case. The differential equation (23) admits a unique solution for t > t_k, given by

$$x(t) = \left( \frac{bK}{a} + \left( 1 - \frac{bK}{a} \right) e^{a(t - t_k)} \right) x_k. \tag{24}$$

To System (22), we associate a Lyapunov-like function of the form

$$V(x(t)) = p x(t)^2, \tag{25}$$

where p > 0 is a solution to the Lyapunov inequality

$$2p(a - bK) \le -q, \tag{26}$$

where q > 0 is a user-defined design parameter. The minimum of V(x(t)) corresponds to

$$0 = \frac{dV(x(t))}{dt} = 2p(ax(t) - bKx_k)x(t). \tag{27}$$

Equation (27) admits two solutions, x(t) = 0 and x(t) = bKx_k/a.
However, the solution x(t) = b K x_k / a is impossible, as it is equivalent to

(bK/a + (1 − bK/a) e^{a(t − t_k)}) x_k = bK x_k / a,
e^{a(t − t_k)} (1 − bK/a) = 0.

We know that e^{a(t − t_k)} ≠ 0, and we cannot choose K such that bK/a = 1, or else we would destabilize the system. Therefore, in the scalar case, dV/dt = 0 if and only if x(t) = 0. Consequently, the local minima of V(x(t)) occur only when x(t) = 0, and ρ_k can be directly computed from Equation (24):

((1 − bK/a) e^{a(ρ_k − t_k)} + bK/a) x_k = 0.

We know that x_k ≠ 0, because at t = t_k, V(x_k) = p x_k² = W(t_k) ≠ 0. Therefore, the times ρ_k are given by the expression

ρ_k = t_k + (1/a) log(bK/(bK − a)).    (28)

We can always take the logarithm of bK/(bK − a), because this quantity is always positive, as the following case analysis shows.

• Case a > 0: The feedback gain is chosen such that a − bK < 0. Then bK − a > 0 and bK > a > 0. Since the numerator and the denominator are both positive, bK/(bK − a) > 0. Moreover, bK/(bK − a) > 1, proving that the ρ_k computed by Equation (28) indeed occurs after t_k.

• Case a < 0: If the open-loop system is already stable, the objective of the control is certainly to place the pole further to the left. Then the feedback gain is chosen such that a − bK < a < 0, so we must have bK > 0 and bK − a > 0. Consequently, as in the previous case, bK/(bK − a) > 0. Even though in this case bK/(bK − a) < 1, so that the logarithm is negative, dividing by a < 0 shows that the ρ_k given by Equation (28) still occurs after t_k.

Equation (28) is independent of x_k, indicating that the interval [t_k, ρ_k] has the same length for all k.

References

[1] Durand S, Guerrero-Castellanos JF, Lozano-Leal R. Self-triggered control for the stabilization of linear systems. In: 9th International Conference on Electrical Engineering, Computing Science and Automatic Control (CCE); 2012.

[2] Velasco M, Martí P, Bini E.
Optimal-sampling-inspired self-triggered control. In: International Conference on Event-based Control, Communication, and Signal Processing (EBCCSP); 2015.

[3] Kishida M. Event-triggered control with self-triggered sampling for discrete-time uncertain systems. IEEE Transactions on Automatic Control 2018; 64(3): 1273-1279.

[4] Mazo M, Anta A, Tabuada P. On self-triggered control for linear systems: guarantees and complexity. In: Proceedings of the European Control Conference; 2009; Budapest, Hungary.

[5] Kobayashi K, Hiraishi K. Self-triggered optimal control of linear systems using convex quadratic programming. In: AMC2014-Yokohama; 2014; Yokohama, Japan.

[6] Wang X, Lemmon MD. Self-triggered feedback control systems with finite-gain L2 stability. IEEE Transactions on Automatic Control 2009; 54(3): 452-467.

[7] Wang X, Lemmon MD. Self-triggered feedback systems with state-independent disturbances. In: 2009 American Control Conference; 2009; St. Louis, MO, USA.

[8] Kobayashi K, Hiraishi K. Self-triggered model predictive control with delay compensation for networked control systems. In: 38th Annual Conference of the IEEE Industrial Electronics Society; 2012.

[9] Henriksson E, Quevedo DE, Peters EGW, Sandberg H, Johansson KH. Multiple-loop self-triggered model predictive control for network scheduling and control. IEEE Transactions on Control Systems Technology 2015; 23(6): 2167-2181.

[10] Zobiri F, Meslem N, Bidégaray-Fesquet B. Event-triggered stabilizing controllers based on an exponentially decreasing threshold. In: Third International Conference on Event-Based Control, Communications, and Signal Processing; 2017; Funchal, Portugal.

[11] Gill PE, Murray W, Wright MH. Practical Optimization. Elsevier Academic Press; 1986.

[12] Boyd S, Vandenberghe L. Convex Optimization. Cambridge University Press, New York; 2014.
[13] Press WH, Teukolsky S, Flannery BP, Vetterling WT. Numerical Recipes: The Art of Scientific Computing. 3rd ed. Cambridge University Press; 2007.

[14] Dorf RC, Bishop R. Modern Control Systems. Pearson Education, Inc.; 2014.