Distributed Event-Triggered Consensus Control of Discrete-Time Linear Multi-Agent Systems under LQ Performance Constraints
Authors: Shumpei Nishida, Kunihisa Okano
Abstract—This paper proposes a distributed event-triggered control method that not only guarantees consensus of multi-agent systems but also satisfies a prescribed LQ performance constraint. Taking the standard distributed control scheme with all-time communication as a baseline, we consider the problem of designing an event-triggered communication rule such that the resulting LQ cost satisfies a performance constraint with respect to the baseline cost while consensus is achieved. For general linear agents over an undirected graph, we employ local state predictors and a local triggering condition based only on information available to each agent. We then derive a sufficient condition for the proposed method to satisfy the performance constraint and guarantee consensus. In addition, we develop a tractable parameter design method for selecting the triggering parameters offline. Numerical examples demonstrate the effectiveness of the proposed method.

I. INTRODUCTION

Distributed and cooperative control of multi-agent systems has been studied extensively because of its broad applications, including vehicle formation [1] and power networks [2]. In particular, the consensus problem, which aims to make all agents reach agreement through local information exchange [3], is one of the fundamental problems and continues to attract significant attention [4]–[7]. However, when communication among agents is implemented over digital networks, frequent information exchange is often impractical due to limited communication resources. This motivates distributed event-triggered control, in which agents transmit information only when prescribed conditions are satisfied, thereby reducing unnecessary communication while achieving consensus [8]–[11].
This work was supported by JST SPRING Grant Number JPMJSP2101 and JSPS KAKENHI Grant Number 25K01453. The authors are with the Graduate School of Science and Engineering, Ritsumeikan University, Shiga 525-8577, Japan. E-mails: {re0158ff@ed., kokano@fc.}ritsumei.ac.jp.

Despite this progress, most existing studies on distributed event-triggered consensus control focus mainly on whether consensus is achieved, with only limited analysis of control performance under event-triggered communication. Since consensus only describes whether the agents asymptotically reach agreement, it does not directly quantify closed-loop performance, such as transient behavior. To address this limitation, several studies have examined the control performance of distributed event-triggered consensus control using quadratic performance criteria [12]–[15]. For example, [12] and [13] show that time-triggered control can outperform event-triggered control in some settings, whereas [14] proposes an event-triggered control method that achieves a lower cost than time-triggered control. Moreover, [15] shows that using more local information can improve closed-loop performance but eliminate the performance advantage of event-triggered control. Although these studies clarify performance properties of event-triggered consensus control, they are limited to simplified settings, such as integrator dynamics, special communication graphs, and performance criteria that quantify only disagreement among agents. Therefore, it remains unclear how to design distributed event-triggered communication mechanisms for general linear multi-agent systems with explicit performance guarantees.

In this paper, we study distributed event-triggered consensus control for discrete-time linear multi-agent systems under an LQ performance constraint.
We consider a quadratic cost that penalizes both disagreement among agents and input energy, and take a distributed control scheme with all-time communication as the baseline. We then formulate the problem of designing a distributed event-triggered communication mechanism that achieves consensus while satisfying a prescribed performance bound with respect to this baseline. The main difficulty is that the performance requirement is global, whereas triggering decisions must be made locally and asynchronously using only locally available information. To address this difficulty, we introduce a local performance index that quantifies the effect of local triggering on the global cost, and derive a distributed triggering rule by decomposing the allowable performance degradation across agents. The resulting event-triggered method uses only locally available information and is designed to satisfy the given LQ performance constraint. To ensure implementability under intermittent communication, we also introduce local state predictors for neighbor agents based on transmitted data. With this framework, we derive sufficient conditions under which the proposed method guarantees both consensus and the prescribed performance level. We also provide a tractable design method for the triggering parameters. In contrast to existing studies [12]–[15], the proposed framework is not restricted to integrator dynamics or to performance criteria that quantify only disagreement among agents. Moreover, unlike conventional norm-based triggering rules in distributed event-triggered consensus control, the proposed triggering rule is derived from a performance-oriented analysis.

This paper is organized as follows. Section II formulates the consensus problem under the LQ performance constraint. Section III presents preliminary results on consensus control.
Section IV proposes the distributed event-triggered control method, and Section V describes the parameter design procedure. Section VI demonstrates a numerical example, and Section VII concludes the paper.

Notation: We denote by R and Z≥0 the sets of real numbers and nonnegative integers, respectively. The n × n identity matrix is denoted by I_n, and diag(d_1, …, d_n) is the diagonal matrix with diagonal entries d_1, …, d_n. The Kronecker product of matrices A and B is written as A ⊗ B. The vector 1_N ∈ R^N is the vector all of whose components are equal to 1. For x ∈ R^n, its Euclidean norm is defined by ∥x∥ := √(x⊤x).

II. PROBLEM FORMULATION

Consider a multi-agent system consisting of N homogeneous agents. Each agent is modeled as

  x_i[k+1] = A x_i[k] + B u_i[k],  i ∈ {1, …, N},   (1)

where x_i[k] ∈ R^n and u_i[k] ∈ R^m denote the state and control input, respectively. We assume that the pair (A, B) is stabilizable, whereas A is not necessarily stable. The communication topology is represented by a connected weighted undirected graph G.

We consider the following performance measure:

  J(x[0]) = Σ_{k=0}^{∞} ( x⊤[k](L ⊗ Q)x[k] + u⊤[k](I_N ⊗ R)u[k] ),   (2)

where L is the Laplacian matrix of G, Q and R are positive definite matrices, and x[k] := [x_1⊤[k] … x_N⊤[k]]⊤, u[k] := [u_1⊤[k] … u_N⊤[k]]⊤. This cost captures both disagreement among the agents and input energy.

In this paper, we study a distributed event-triggered control scheme that guarantees a prescribed level of LQ performance with respect to a scheme with all-time communication. As a baseline, we consider the standard distributed control law

  u_i[k] = −cF ζ_i[k],  ζ_i[k] := Σ_{j∈N_i} a_ij (x_i[k] − x_j[k]),   (3)

where a_ij is the (i, j)th entry of the weighted adjacency matrix of G, c > 0 is a coupling gain, and F ∈ R^{m×n} is a feedback gain.
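For concreteness, the cost (2) under the baseline law (3) can be evaluated by simulating the closed loop; the sketch below uses placeholder system data (a scalar integrator over an unweighted cycle graph), not the paper's example:

```python
import numpy as np

def cycle_laplacian(N):
    """Laplacian of an unweighted cycle graph on N nodes (a_ij ∈ {0, 1})."""
    L = 2.0 * np.eye(N)
    for i in range(N):
        L[i, (i + 1) % N] -= 1.0
        L[i, (i - 1) % N] -= 1.0
    return L

def lq_cost_all_time(A, B, F, c, L, Q, R, x0, horizon=1000):
    """Approximate the LQ cost (2) under (3) by finite-horizon simulation:
    x[k+1] = (I_N ⊗ A − cL ⊗ BF)x[k] with stacked input u = −(cL ⊗ F)x."""
    N = L.shape[0]
    Acl = np.kron(np.eye(N), A) - c * np.kron(L, B @ F)
    K = c * np.kron(L, F)                 # stacked gain: u = −Kx
    LQ = np.kron(L, Q)
    IR = np.kron(np.eye(N), R)
    x, J = np.asarray(x0, dtype=float).copy(), 0.0
    for _ in range(horizon):
        u = -K @ x
        J += float(x @ LQ @ x + u @ IR @ u)
        x = Acl @ x
    return J
```

When the disagreement modes are exponentially stable, the truncated sum converges quickly, so a moderate horizon already gives the infinite-horizon value to high accuracy.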
We denote by J_all(x[0]) the LQ cost achieved by (3), where all agents exchange the information required to compute ζ_i[k] at every time step.

Let t_ℓ^i denote the ℓth transmission time of agent i. We assume that communication is delay-free. Using the most recently transmitted information, agent i maintains local copies of the predicted states of its neighbors and of itself. For each j ∈ N_i ∪ {i}, these local copies are updated according to

  x̂_j[k+1] = A x̂_j[k] + B û_j[k],  x̂_j[t_ℓ^j] = x_j[t_ℓ^j],   (4)

where û_j[k] = u_j[t_ℓ^j] for t_ℓ^j ≤ k < t_{ℓ+1}^j. Since all such agents use the same transmitted state at each transmission instant and the same predictor dynamics, these local copies coincide. Using (4), we consider the following distributed event-triggered controller:

  u_i[k] = −cF ζ̂_i[k],  ζ̂_i[k] := Σ_{j∈N_i} a_ij (x̂_i[k] − x̂_j[k]).   (5)

We denote by J_etc(x[0]) the LQ cost (2) under the event-triggered controller (5).

Under the above setup, given a constant ρ ≥ 1, our objective is to design a distributed event-triggered communication mechanism such that

  J_etc(x[0]) ≤ ρ J_all(x[0]),  ∀x[0] ∈ R^{Nn},   (6)

while ensuring consensus of the multi-agent system, i.e.,

  lim_{k→∞} ∥x_i[k] − x_j[k]∥ = 0,  ∀i, j ∈ {1, …, N}.   (7)

Remark 1: The inequality (6) is a performance requirement with respect to the baseline distributed control scheme with all-time communication introduced above. In particular, J_all(x[0]) is the cost achieved by this scheme and is not necessarily the globally optimal value of the LQ cost. Hence, the constant ρ specifies the allowable performance degradation with respect to the selected baseline while reducing transmissions through event-triggered communication. A more detailed discussion of this point is given in Section III.
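The predictor (4) can be kept as a small local object per copied agent, propagated with the model and reset at transmission instants; a minimal sketch (hypothetical shapes, not the authors' implementation):

```python
import numpy as np

class NeighborPredictor:
    """Local copy x̂_j that an agent maintains for agent j, following (4):
    propagate with the model and the last transmitted input, and reset to
    the transmitted state at j's transmission instants."""

    def __init__(self, A, B, x0, u0):
        self.A, self.B = A, B
        self.xhat = np.asarray(x0, dtype=float).copy()  # x̂_j[t_0^j] = x_j[0]
        self.uhat = np.asarray(u0, dtype=float).copy()  # held input u_j[t_ℓ^j]

    def step(self):
        """A-priori prediction: x̂_j[k+1] = A x̂_j[k] + B û_j[k]."""
        self.xhat = self.A @ self.xhat + self.B @ self.uhat

    def reset(self, x_transmitted, u_transmitted):
        """On a transmission from j, overwrite the copy and the held input."""
        self.xhat = np.asarray(x_transmitted, dtype=float).copy()
        self.uhat = np.asarray(u_transmitted, dtype=float).copy()
```

Because every agent that copies agent j applies the same `reset` data and the same `step` dynamics, all copies of x̂_j coincide, as noted above.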
Remark 2: If local state-feedback controllers of the form u_i[k] = F_ℓ x_i[k] are allowed, then, since (A, B) is stabilizable, one can choose F_ℓ such that A + BF_ℓ is Schur stable. In that case, all agents converge to the origin independently, and hence consensus in the sense of (7) may be achieved without any communication. In this paper, we restrict our attention to the distributed diffusive control laws (3) and (5), which are implemented using information exchanged over the graph, rather than to local stabilization.

Before presenting the proposed event-triggered control method, we provide sufficient conditions under which the multi-agent system achieves consensus under the distributed controller (3), and derive an explicit form of the corresponding LQ cost.

III. CONSENSUS AND LQ COST UNDER ALL-TIME COMMUNICATION

In this section, we consider consensus control of multi-agent systems under all-time communication, and derive the corresponding LQ cost. With the distributed controller (3), the overall closed-loop system is given by

  x[k+1] = (I_N ⊗ A − cL ⊗ BF) x[k].   (8)

Since G is weighted undirected and connected, the Laplacian matrix L is symmetric positive semidefinite and has a simple zero eigenvalue. Let U ∈ R^{N×N} be an orthogonal matrix such that U⊤LU = Λ := diag(0, λ_2(L), …, λ_N(L)), where 0 < λ_2(L) ≤ ⋯ ≤ λ_N(L) are the nonzero eigenvalues of L. We partition U as U = [(1/√N)1_N  U_2]. Define x̃[k] := (U⊤ ⊗ I_n)x[k] with x̃[k] = [x̃_1⊤[k] … x̃_N⊤[k]]⊤. Then, the closed-loop system (8) is transformed into

  x̃[k+1] = (I_N ⊗ A − cΛ ⊗ BF) x̃[k],   (9)

that is,

  x̃_1[k+1] = A x̃_1[k],   (10)
  x̃_i[k+1] = (A − cλ_i(L)BF) x̃_i[k],  i = 2, …, N.   (11)

From (10), we can see that x̃_1[k] represents the component of x[k] along the consensus subspace, since x̃_1[k] = ((1/√N)1_N⊤ ⊗ I_n)x[k] implies that the corresponding component in R^{Nn} is ((1/√N)1_N ⊗ I_n)x̃_1[k] ∈ Im(1_N ⊗ I_n). In contrast, (11) describes the disagreement modes among the agents, since

  x[k] = (U ⊗ I_n)x̃[k] = ((1/√N)1_N ⊗ I_n)x̃_1[k] + Σ_{i=2}^{N} (v_i ⊗ I_n)x̃_i[k],

where v_i is the ith column vector of U_2. This means that x[k] is decomposed into an agreement vector and disagreement vectors. Hence, consensus is achieved if and only if A − cλ_i(L)BF is Schur stable for all i = 2, …, N [4]. Moreover, if consensus is achieved, all agents asymptotically follow the same trajectory [16]:

  x_i[k] − (1/N) Σ_{j=1}^{N} A^k x_j[0] → 0,  ∀i ∈ {1, …, N}.

Let Q_ℓ ∈ R^{n×n} be a positive semidefinite matrix such that (A, Q_ℓ^{1/2}) is detectable. Since (A, B) is stabilizable, there exists a matrix P ≻ 0 that solves the following Riccati equation:

  P = Q_ℓ + A⊤PA − A⊤PB(R + B⊤PB)^{−1}B⊤PA.   (12)

We design the feedback gain F as

  F = (R + B⊤PB)^{−1}B⊤PA.   (13)

Moreover, we define

  θ := ( λ_min(R) / λ_max(R + B⊤PB) )^{1/2}.

Then, we obtain a sufficient condition on c to guarantee consensus of the multi-agent system (1) [6].

Lemma 1: Suppose that (A, B) is stabilizable and (A, Q_ℓ^{1/2}) is detectable. Assume that the undirected graph G is connected. If c > 0 is chosen so that

  1 / ((1 + θ)λ_2(L)) < c < 1 / ((1 − θ)λ_N(L)),   (14)

then the multi-agent system (1) reaches consensus under the distributed controller (3).

Under the above condition, the LQ cost (2) is given by

  J_all(x[0]) = Σ_{i=2}^{N} Σ_{k=0}^{∞} x̃_i⊤[k] W_i x̃_i[k],

where W_i := λ_i(L)Q + c²λ_i²(L)F⊤RF. Therefore, the corresponding LQ cost admits the following explicit form.

Corollary 1: Suppose that F is given by (13) and c is designed so as to satisfy (14).
Then the resulting LQ cost (2) is

  J_all(x[0]) = Σ_{i=2}^{N} x̃_i⊤[0] P_i x̃_i[0],   (15)

where P_i, i = 2, …, N, is the solution to

  P_i = (A − cλ_i(L)BF)⊤ P_i (A − cλ_i(L)BF) + W_i.   (16)

Proof: Under the distributed controller (3), we have u[k] = −(cL ⊗ F)x[k]. Hence, J_all(x[0]) can be transformed into

  J_all(x[0]) = Σ_{k=0}^{∞} x⊤[k] (L ⊗ Q + c²L² ⊗ F⊤RF) x[k]
             = Σ_{k=0}^{∞} x̃⊤[k] (Λ ⊗ Q + c²Λ² ⊗ F⊤RF) x̃[k]
             = Σ_{k=0}^{∞} Σ_{i=2}^{N} x̃_i⊤[k] (λ_i(L)Q + c²λ_i²(L)F⊤RF) x̃_i[k]
             = Σ_{i=2}^{N} Σ_{k=0}^{∞} x̃_i⊤[k] W_i x̃_i[k],

where the term corresponding to i = 1 vanishes since λ_1(L) = 0. Let Ã_i := A − cλ_i(L)BF for each i = 2, …, N. Using (9) and (16), it follows that

  x̃_i⊤[k] W_i x̃_i[k] = x̃_i⊤[k] (P_i − Ã_i⊤P_iÃ_i) x̃_i[k]
                      = x̃_i⊤[k] P_i x̃_i[k] − x̃_i⊤[k+1] P_i x̃_i[k+1].

Thus, for each K ∈ Z≥0, we have

  Σ_{k=0}^{K} x̃_i⊤[k] W_i x̃_i[k] = x̃_i⊤[0] P_i x̃_i[0] − x̃_i⊤[K+1] P_i x̃_i[K+1].

Since Ã_i is Schur stable, we have x̃_i[k] → 0 as k → ∞. Hence, lim_{K→∞} x̃_i⊤[K+1] P_i x̃_i[K+1] = 0. By taking K → ∞, we obtain

  Σ_{k=0}^{∞} x̃_i⊤[k] W_i x̃_i[k] = x̃_i⊤[0] P_i x̃_i[0],

and summing this equation over i = 2, …, N yields (15).

Remark 3: The cost J_all(x[0]) achieved by the distributed controller (3) is not necessarily the minimum value of the LQ cost (2). This is because the feedback gain (13) is not obtained by directly solving the optimal control problem associated with (2); instead, it is designed from the local Riccati equation (12) with the local weighting matrices. On the other hand, the cost (2) is a global performance criterion that depends on the disagreement among agents through the Laplacian matrix L, and (15) involves the nonzero eigenvalues of L through P_i.
Hence, the controller design takes local quadratic criteria into account, but does not directly minimize the global LQ cost (2). Therefore, the resulting value J_all(x[0]) is interpreted as the cost of the selected baseline distributed controller, rather than the globally minimal value of (2).

We note that the existence of a unique positive definite solution P_i ≻ 0 is guaranteed since A − cλ_i(L)BF is Schur stable and W_i ≻ 0 for i = 2, …, N. In the following section, we employ the feedback gain F given by (13) and a coupling gain c satisfying (14).

IV. DISTRIBUTED EVENT-TRIGGERED CONSENSUS UNDER LQ PERFORMANCE CONSTRAINTS

In this section, we propose a distributed event-triggered consensus control method under the LQ performance constraint. First, we introduce several notations used throughout this paper. Let e_i[k] := x̂_i[k] − x_i[k] and e[k] := [e_1⊤[k] … e_N⊤[k]]⊤. We consider the following variable:

  x̄_i[k] = A x̂_i[k−1] + B û_i[k−1],   (17)

which serves as the a priori prediction of x_i[k], with the initial state x̄_i[0] = 0. Accordingly, x̂_i[k] is updated by

  x̂_i[k] = x_i[k] if k = t_ℓ^i for some ℓ ∈ Z≥0, and x̂_i[k] = x̄_i[k] otherwise.

We also define ē_i[k] := x̄_i[k] − x_i[k] and ē[k] := [ē_1⊤[k] … ē_N⊤[k]]⊤. In addition, let us consider

  ϕ̂_i[k] := (1/2) Σ_{j∈N_i} a_ij (x̂_i[k] − x̂_j[k])⊤ Q (x̂_i[k] − x̂_j[k]) + c² ζ̂_i⊤[k] F⊤RF ζ̂_i[k],   (18)

which depends only on information locally available to agent i. Suppose that each agent shares its state at time k = 0, i.e., t_0^i = 0 for all i ∈ {1, …, N}. Then, we have x̂_i[0] = x_i[0].
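A minimal sketch (hypothetical data, not the authors' implementation) of the local stage cost ϕ̂_i in (18), together with the weighted error-vs-cost comparison that the triggering rule (19) applies:

```python
import numpy as np

def local_stage_cost(xhat_i, xhat_neighbors, a_i, zeta_hat_i, Q, R, F, c):
    """ϕ̂_i in (18): half of the Q-weighted disagreement of the predicted
    states over the neighborhood, plus the predicted input-energy term."""
    phi = 0.0
    for a_ij, xhat_j in zip(a_i, xhat_neighbors):
        d = xhat_i - xhat_j
        phi += 0.5 * a_ij * float(d @ Q @ d)
    phi += c ** 2 * float(zeta_hat_i @ (F.T @ R @ F) @ zeta_hat_i)
    return phi

def should_transmit(e_bar_i, Omega_i, phi_prev, sigma):
    """Trigger test of (19): transmit at time k when the Ω_i-weighted
    a-priori prediction error exceeds σ · ϕ̂_i[k−1]."""
    return float(e_bar_i @ Omega_i @ e_bar_i) > sigma * phi_prev
```

Each agent evaluates `should_transmit` with its own ē_i[k] and the stage cost stored from the previous step; no information beyond the locally maintained predicted copies is needed.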
Using (18), each agent asynchronously determines its transmission instants {t_{ℓ+1}^i}_{ℓ∈Z≥0} as follows:

  t_{ℓ+1}^i = min{ k > t_ℓ^i : ē_i⊤[k] Ω_i ē_i[k] > σ ϕ̂_i[k−1] },   (19)

where Ω_i ≻ 0 and σ > 0 are design parameters. Intuitively, agent i triggers when the weighted prediction error becomes too large compared to the local estimated stage cost. This is because a larger disagreement among agents can tolerate a larger prediction error, whereas near consensus even a small prediction error can have a relatively large impact on the LQ performance.

Note that, as a consequence of (17) and (19), it holds that e_i[k] = 0 if k = t_ℓ^i for some ℓ ∈ Z≥0 and e_i[k] = ē_i[k] otherwise, which implies that

  e_i⊤[k] Ω_i e_i[k] ≤ σ ϕ̂_i[k−1],  ∀k ∈ Z≥1, i ∈ {1, …, N}.

We also have

  Σ_{i=1}^{N} ϕ̂_i[k] = g_all(x̂[k]),   (20)

where g_all(x) := x⊤Sx and S := L ⊗ Q + c²L² ⊗ F⊤RF. For i ∈ {2, …, N}, define a matrix

  Γ_i := c²λ_i²(L) F⊤B⊤P_i BF + (1/ε) c²λ_i²(L) F⊤B⊤P_i (A − cλ_i(L)BF) W_i^{−1} (A − cλ_i(L)BF)⊤ P_i BF,

and Γ_U := (U ⊗ I_n) diag(0, Γ_2, …, Γ_N) (U⊤ ⊗ I_n). We also define

  α_S := λ_max(Ω̂^{−1/2} S Ω̂^{−1/2}),  α_{S_u} := λ_max(Ω̂^{−1/2} S_u Ω̂^{−1/2}),  α_{Γ_U} := λ_max(Ω̂^{−1/2} Γ_U Ω̂^{−1/2}),

where Ω̂ := diag(Ω_1, …, Ω_N) and S_u := c²L² ⊗ F⊤RF. Then the following lemmas hold, which are used to establish the main results.

Lemma 2: Suppose that σ satisfies

  0 < σ < 1/α_S,   (21)

and choose a constant η > 0 such that

  η > σα_S / (1 − σα_S).   (22)

Then, it holds that

  Σ_{k=0}^{∞} e⊤[k] Ω̂ e[k] ≤ β Σ_{k=0}^{∞} g_all(x[k]),

where

  β := σ(1 + η) / (1 − σ(1 + η^{−1})α_S).   (23)

Proof: For any x, e ∈ R^{Nn}, Young's inequality gives

  g_all(x + e) = (x + e)⊤S(x + e) ≤ (1 + η) x⊤Sx + (1 + η^{−1}) e⊤Se.   (24)

By the definition of α_S, and since Ω̂^{−1/2} S Ω̂^{−1/2} is symmetric, it follows that S ⪯ α_S Ω̂.
Hence,

  e⊤Se ≤ α_S e⊤Ω̂e.   (25)

Combining (24) and (25), we obtain

  g_all(x + e) ≤ (1 + η) g_all(x) + (1 + η^{−1}) α_S e⊤Ω̂e.   (26)

Moreover, (19) and (20) imply

  e⊤[k] Ω̂ e[k] ≤ σ g_all(x̂[k−1]),  k ≥ 1.   (27)

Since x̂[k−1] = x[k−1] + e[k−1], it follows from (26) and (27) that

  e⊤[k] Ω̂ e[k] ≤ σ g_all(x[k−1] + e[k−1])
              ≤ σ(1 + η) g_all(x[k−1]) + σ(1 + η^{−1}) α_S e⊤[k−1] Ω̂ e[k−1]   (28)

for k ≥ 1. Now define a_k := e⊤[k] Ω̂ e[k] and b_k := g_all(x[k]). Then (28) becomes

  a_k ≤ σ(1 + η) b_{k−1} + σ(1 + η^{−1}) α_S a_{k−1},  k ≥ 1.   (29)

Let T ≥ 2. Summing (29) from k = 1 to k = T − 1 yields

  Σ_{k=1}^{T−1} a_k ≤ σ(1 + η) Σ_{k=0}^{T−2} b_k + σ(1 + η^{−1}) α_S Σ_{k=0}^{T−2} a_k.

Since x̂_i[0] = x_i[0] for all i, we have e[0] = 0, and therefore a_0 = 0. It also holds that a_k ≥ 0 and b_k ≥ 0 for all k by the definitions. Hence,

  Σ_{k=0}^{T−1} a_k ≤ σ(1 + η) Σ_{k=0}^{T−1} b_k + σ(1 + η^{−1}) α_S Σ_{k=0}^{T−1} a_k.

Since the condition (22) is equivalent to 1 − σ(1 + η^{−1})α_S > 0, we have

  Σ_{k=0}^{T−1} a_k ≤ ( σ(1 + η) / (1 − σ(1 + η^{−1})α_S) ) Σ_{k=0}^{T−1} b_k = β Σ_{k=0}^{T−1} g_all(x[k]).   (30)

Finally, since Σ_{k=0}^{T−1} a_k and Σ_{k=0}^{T−1} b_k are monotone nondecreasing in T, taking T → ∞ yields

  Σ_{k=0}^{∞} e⊤[k] Ω̂ e[k] ≤ β Σ_{k=0}^{∞} g_all(x[k]),

which completes the proof.

Lemma 3: Let β be the constant given in (23), and let ε ∈ (0, 1). If it holds that

  1 − ε − α_{Γ_U} β > 0,   (31)

then

  γ := 1 / (1 − ε − α_{Γ_U} β)   (32)

is well defined and satisfies

  Σ_{k=0}^{∞} g_all(x[k]) ≤ γ J_all(x[0]).

Proof: Define the function

  V(x[k]) := Σ_{i=2}^{N} x̃_i⊤[k] P_i x̃_i[k].

By Corollary 1, we have V(x[0]) = J_all(x[0]). We first show that, for all k ∈ Z≥0 and ε ∈ (0, 1), it holds that

  V(x[k+1]) − V(x[k]) ≤ −(1 − ε) g_all(x[k]) + e⊤[k] Γ_U e[k].   (33)

For each i ∈ {2, …, N}, let

  ΔV_i := x̃_i⊤[k+1] P_i x̃_i[k+1] − x̃_i⊤[k] P_i x̃_i[k].

Under the event-triggered controller (5) with (19), the disagreement dynamics are expressed as

  x̃_i[k+1] = (A − cλ_i(L)BF) x̃_i[k] − cλ_i(L)BF ẽ_i[k],  i ∈ {2, …, N},   (34)

where ẽ[k] := (U⊤ ⊗ I_n)e[k]. Using (16) and (34), we obtain

  ΔV_i = −x̃_i⊤[k] W_i x̃_i[k] − 2 x̃_i⊤[k] (A − cλ_i(L)BF)⊤ P_i (cλ_i(L)BF) ẽ_i[k] + c²λ_i²(L) ẽ_i⊤[k] F⊤B⊤P_i BF ẽ_i[k].

By applying Young's inequality, for any ε ∈ (0, 1), it holds that

  2(−x̃_i[k])⊤ (A − cλ_i(L)BF)⊤ P_i (cλ_i(L)BF) ẽ_i[k]
    ≤ ε x̃_i⊤[k] W_i x̃_i[k] + (c²λ_i²(L)/ε) ẽ_i⊤[k] F⊤B⊤P_i (A − cλ_i(L)BF) W_i^{−1} (A − cλ_i(L)BF)⊤ P_i BF ẽ_i[k].

Hence, by the definition of Γ_i, we have

  ΔV_i ≤ −(1 − ε) x̃_i⊤[k] W_i x̃_i[k] + ẽ_i⊤[k] Γ_i ẽ_i[k].

Summing this inequality over i = 2, …, N, and using the definitions of g_all and Γ_U, we obtain (33). Moreover, by the definition of α_{Γ_U} and the symmetry of Ω̂^{−1/2} Γ_U Ω̂^{−1/2}, we have

  e⊤[k] Γ_U e[k] ≤ α_{Γ_U} e⊤[k] Ω̂ e[k].   (35)

Therefore, (33) and (35) give

  V(x[k+1]) − V(x[k]) ≤ −(1 − ε) g_all(x[k]) + α_{Γ_U} e⊤[k] Ω̂ e[k].

Let T ≥ 1. Summing the above inequality from k = 0 to k = T − 1 yields

  V(x[T]) − V(x[0]) ≤ −(1 − ε) Σ_{k=0}^{T−1} g_all(x[k]) + α_{Γ_U} Σ_{k=0}^{T−1} e⊤[k] Ω̂ e[k].

From (30), it follows that

  V(x[T]) − V(x[0]) ≤ −(1 − ε − α_{Γ_U}β) Σ_{k=0}^{T−1} g_all(x[k]).

Since V(x[T]) ≥ 0, we obtain

  (1 − ε − α_{Γ_U}β) Σ_{k=0}^{T−1} g_all(x[k]) ≤ V(x[0]).

Under the assumption (31), this implies

  Σ_{k=0}^{T−1} g_all(x[k]) ≤ γ V(x[0]).

Finally, since g_all(x[k]) ≥ 0 for all k, Σ_{k=0}^{T−1} g_all(x[k]) is nondecreasing in T and bounded above.
Therefore, by taking T → ∞, we obtain

  Σ_{k=0}^{∞} g_all(x[k]) ≤ γ V(x[0]) = γ J_all(x[0]),

which completes the proof.

The following theorem establishes a sufficient condition to guarantee (6) for a given ρ > 1.

Theorem 1: For β and γ given in (23) and (32), respectively, define

  ρ̂ := (1 + δ + (1 + δ^{−1}) α_{S_u} β) γ,   (36)

where δ > 0. Then, for a given ρ > 1, if ρ̂ ≤ ρ holds, we have

  J_etc(x[0]) ≤ ρ J_all(x[0]),  ∀x[0] ∈ R^{Nn}   (37)

under (5) and (19).

Proof: Let us define

  g_etc(x, e) := x⊤(L ⊗ Q)x + (x + e)⊤ S_u (x + e).   (38)

To prove this theorem, it suffices to show that Σ_k g_etc(x[k], e[k]) is bounded in terms of Σ_k g_all(x[k]) under (5) and (19). Since L ⊗ Q ⪰ 0, we have

  x⊤[k] S_u x[k] ≤ x⊤[k](L ⊗ Q)x[k] + x⊤[k] S_u x[k] = g_all(x[k])   (39)

for x[k] ∈ R^{Nn}. Applying (39) and Young's inequality to (38) yields

  g_etc(x[k], e[k]) ≤ (1 + δ) g_all(x[k]) + (1 + δ^{−1}) e⊤[k] S_u e[k].

Summing over k = 0, 1, …, we obtain

  J_etc(x[0]) ≤ (1 + δ) Σ_{k=0}^{∞} g_all(x[k]) + (1 + δ^{−1}) Σ_{k=0}^{∞} e⊤[k] S_u e[k].

Moreover, since Ω̂^{−1/2} S_u Ω̂^{−1/2} is symmetric, it holds that e⊤[k] S_u e[k] ≤ α_{S_u} e⊤[k] Ω̂ e[k] [17]. Thus,

  J_etc(x[0]) ≤ (1 + δ) Σ_{k=0}^{∞} g_all(x[k]) + (1 + δ^{−1}) α_{S_u} Σ_{k=0}^{∞} e⊤[k] Ω̂ e[k].

Using Lemmas 2 and 3, we obtain J_etc(x[0]) ≤ ρ̂ J_all(x[0]) for all x[0] ∈ R^{Nn}. Hence, if ρ̂ ≤ ρ holds, we have (37).

Remark 4: Although the performance constraint in (6) is posed for ρ ≥ 1, Theorem 1 implies that, for any ρ > 1, the design parameters can be chosen so that ρ̂ ≤ ρ holds. Indeed, for any admissible choice of σ, δ, and ε, one has ρ̂ > 1. On the other hand, by choosing δ > 0 and ε ∈ (0, 1) sufficiently small, and then taking σ > 0 sufficiently small, we can make ρ̂ arbitrarily close to 1.
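As a small numerical sketch, the certified ratio ρ̂ of (36) can be evaluated directly from (23) and (32); the α constants used below are placeholders, not values computed from a particular system:

```python
def beta_bound(sigma, eta, alpha_S):
    """β in (23); requires 0 < σ < 1/α_S and η satisfying (22)."""
    denom = 1.0 - sigma * (1.0 + 1.0 / eta) * alpha_S
    assert denom > 0.0, "condition (22) violated"
    return sigma * (1.0 + eta) / denom

def rho_hat(sigma, eta, delta, eps, alpha_S, alpha_Su, alpha_GU):
    """ρ̂ in (36) with γ from (32); returns None when (31) fails,
    i.e., when no performance guarantee is certified."""
    beta = beta_bound(sigma, eta, alpha_S)
    slack = 1.0 - eps - alpha_GU * beta
    if slack <= 0.0:
        return None
    gamma = 1.0 / slack
    return (1.0 + delta + (1.0 + 1.0 / delta) * alpha_Su * beta) * gamma
```

Consistent with Remark 4, shrinking σ, δ, and ε drives the returned ρ̂ toward 1 from above.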
As a consequence of Theorem 1, the multi-agent system achieves consensus while meeting the performance constraint (6).

Theorem 2: Let ρ > 1 be given. Suppose that {Ω_i}_{i=1}^{N}, σ, δ, ε, and η are chosen so that ρ̂ ≤ ρ holds. Then, under (5) and (19), the multi-agent system (1) reaches consensus.

Proof: By Corollary 1, under the feedback gain F in (13) and the coupling gain c satisfying (14), J_all(x[0]) is finite for every x[0] ∈ R^{Nn}. Since ρ̂ ≤ ρ, Theorem 1 yields

  J_etc(x[0]) ≤ ρ J_all(x[0]) < ∞,  ∀x[0] ∈ R^{Nn}.

Hence,

  Σ_{k=0}^{∞} x⊤[k](L ⊗ Q)x[k] < ∞,   (40)

where we use x⊤[k](L ⊗ Q)x[k] ≤ g_etc(x[k], e[k]) for all k. Recalling that x̃[k] = (U⊤ ⊗ I_n)x[k] and U⊤LU = Λ, we have

  x⊤[k](L ⊗ Q)x[k] = x̃⊤[k](Λ ⊗ Q)x̃[k] = Σ_{i=2}^{N} λ_i(L) x̃_i⊤[k] Q x̃_i[k].   (41)

Combining (40) and (41) yields

  Σ_{k=0}^{∞} Σ_{i=2}^{N} λ_i(L) x̃_i⊤[k] Q x̃_i[k] < ∞.

Since the graph is connected, we have λ_i(L) > 0 for all i ∈ {2, …, N}, and since Q ≻ 0, it follows that

  Σ_{k=0}^{∞} ∥x̃_i[k]∥² < ∞,  ∀i ∈ {2, …, N}.

Therefore, we obtain lim_{k→∞} x̃_i[k] = 0 for all i ∈ {2, …, N}, which completes the proof.

In the next section, we provide a design method for {Ω_i}_{i=1}^{N}, σ, δ, ε, and η so as to satisfy ρ̂ ≤ ρ for a fixed ρ > 1.

V. PARAMETER DESIGN

For a given performance level ρ > 1, we explain how to choose {Ω_i}_{i=1}^{N}, η, δ, σ, and ε so that ρ̂ ≤ ρ holds. Although the condition ρ̂ ≤ ρ is satisfied with a sufficiently small σ, such a choice would result in frequent transmissions, which is not desirable. Therefore, the main idea is to make σ as large as possible while meeting the performance constraint, so that the triggering condition (19) is less likely to be violated, resulting in fewer transmissions. The design procedure described below is carried out offline.
However, increasing σ alone does not necessarily reduce the number of transmissions, since whether to transmit depends on the relative scale of σ and {Ω_i}_{i=1}^{N}: even if σ is large, the number of transmissions may not decrease if each Ω_i is scaled up accordingly. To fix the scale of {Ω_i}_{i=1}^{N}, we impose Σ_{i=1}^{N} tr(Ω_i) = 1, and then seek to make σ as large as possible under this normalization.

Since directly maximizing σ over all design parameters is intractable, we design the parameters sequentially. For a fixed ε, we first design {Ω_i}_{i=1}^{N}. Next, for the resulting {Ω_i}_{i=1}^{N} and a given σ, we derive closed-form expressions for η and δ that minimize ρ̂. Then we solve a maximization problem for σ. Finally, we conduct a grid search over ε and select the value that yields the largest feasible σ.

We begin with the design of {Ω_i}_{i=1}^{N} for a fixed ε. By the definitions of α_S, α_{Γ_U}, and α_{S_u}, for a positive scalar κ, the inequalities α_S ≤ κ, α_{S_u} ≤ κ, α_{Γ_U} ≤ κ are respectively equivalent to S ⪯ κΩ̂, S_u ⪯ κΩ̂, Γ_U ⪯ κΩ̂. Hence, for each fixed ε, we consider the problem of finding the smallest such upper bound κ:

  min_{Ω_1,…,Ω_N} κ
  s.t. S ⪯ κΩ̂, S_u ⪯ κΩ̂, Γ_U ⪯ κΩ̂,
       Ω̂ = diag(Ω_1, …, Ω_N), Ω_i ≻ 0, i ∈ {1, …, N},
       Σ_{i=1}^{N} tr(Ω_i) = 1.   (42)

However, Problem (42) is not directly tractable, since the constraints contain bilinear matrix inequalities through the term κΩ̂. To remove this bilinearity, we introduce new variables X_i := κΩ_i, i ∈ {1, …, N}, and define X̂ := diag(X_1, …, X_N). Then, it holds that X̂ = κΩ̂. Moreover, since Σ_{i=1}^{N} tr(Ω_i) = 1, we have

  Σ_{i=1}^{N} tr(X_i) = κ Σ_{i=1}^{N} tr(Ω_i) = κ.

Therefore, for each fixed ε, Problem (42) can be transformed into

  min_{X_1,…,X_N} Σ_{i=1}^{N} tr(X_i)
  s.t. S ⪯ X̂, S_u ⪯ X̂, Γ_U ⪯ X̂,
       X̂ = diag(X_1, …, X_N), X_i ≻ 0, i ∈ {1, …, N}.   (43)

Problem (43) is a semidefinite program and can be solved using standard solvers, such as CVX [18]. Let {X_i^⋆(ε)}_{i=1}^{N} be an optimal solution to Problem (43). Then the optimal value of Problem (42) is

  κ^⋆(ε) = Σ_{i=1}^{N} tr(X_i^⋆(ε)),

and an optimal solution is given by

  Ω_i^⋆(ε) = X_i^⋆(ε) / κ^⋆(ε),  i ∈ {1, …, N}.   (44)

We denote by α_S(ε), α_{S_u}(ε), and α_{Γ_U}(ε) the corresponding values of α_S, α_{S_u}, and α_{Γ_U} with Ω_i^⋆(ε), respectively.

We next choose η for the fixed ε. For fixed Ω̂^⋆(ε) and σ ∈ (0, 1/α_S(ε)), we seek η that minimizes ρ̂, which is achieved by minimizing β. Indeed, by differentiating ρ̂ with respect to β, we obtain

  dρ̂/dβ = ( (1 + δ^{−1})α_{S_u}(ε)(1 − ε) + α_{Γ_U}(ε)(1 + δ) ) / (1 − ε − α_{Γ_U}(ε)β)² ≥ 0.

This means that, for fixed ε, Ω̂^⋆(ε), σ, and δ > 0, ρ̂ is a nondecreasing function of β on 1 − ε − α_{Γ_U}(ε)β > 0. Hence, it suffices to minimize β with respect to η in order to minimize ρ̂. The following lemma provides such a choice of η.

Lemma 4: Fix ε and σ ∈ (0, 1/α_S(ε)). Then

  η = η^⋆(σ; ε) := √(σα_S(ε)) / (1 − √(σα_S(ε)))   (45)

is the unique minimizer of β over

  η > σα_S(ε) / (1 − σα_S(ε)),   (46)

and the corresponding minimum value is given by

  β_min(σ; ε) = σ / (1 − √(σα_S(ε)))².

Proof: Define a := σα_S(ε). Since σ ∈ (0, 1/α_S(ε)), it follows that 0 < a < 1. Then, (46) can be written as

  η > a / (1 − a).   (47)

For fixed σ and ε, we define

  β_{σ,ε}(η) := σ(1 + η) / (1 − σ(1 + η^{−1})α_S(ε)).

Multiplying the numerator and denominator by η, we can rewrite it as

  β_{σ,ε}(η) = σ η(1 + η) / ((1 − a)η − a).

Since η > a/(1 − a), we have (1 − a)η − a > 0 on the feasible set (47). Differentiating β_{σ,ε}(η) with respect to η yields

  dβ_{σ,ε}(η)/dη = σ h(η) / ((1 − a)η − a)²,

where h(η) := (1 − a)η² − 2aη − a.
Since ((1 − a)η − a)² > 0 for η satisfying (47), the sign of dβ_{σ,ε}(η)/dη is determined by the sign of h(η). Solving h(η) = 0, we obtain

  η = ( 2a ± √(4a² + 4a(1 − a)) ) / (2(1 − a)) = (a ± √a) / (1 − a).

Hence, the two roots are given by

  η₋ = (a − √a)/(1 − a) = −√a/(1 + √a) < 0,  η₊ = (a + √a)/(1 − a) = √a/(1 − √a).

Therefore, η₋ does not satisfy (47). Moreover,

  η₊ − a/(1 − a) = √a/(1 − √a) − a/(1 − a) = √a/(1 − a) > 0,

which means that η₊ satisfies (47). Since 1 − a > 0, the quadratic function h(η) is convex, and since its roots are η₋ < 0 and η₊ > a/(1 − a), it follows that

  h(η) < 0 for a/(1 − a) < η < η₊,  h(η) > 0 for η > η₊.

Hence, the sign of dβ_{σ,ε}(η)/dη is negative on (a/(1 − a), η₊) and positive on (η₊, ∞), so β_{σ,ε}(η) is strictly decreasing on (a/(1 − a), η₊) and strictly increasing on (η₊, ∞). Consequently,

  η = √(σα_S(ε)) / (1 − √(σα_S(ε)))

is the unique minimizer of β_{σ,ε}(η) over (47).

Finally, we derive the corresponding minimum value of β. Since

  1 + η^⋆(σ; ε) = 1 + √a/(1 − √a) = 1/(1 − √a),

and

  (1 − a)η^⋆(σ; ε) − a = (1 − a)√a/(1 − √a) − a = (1 + √a)√a − a = √a,

it follows that

  β_min(σ; ε) = β_{σ,ε}(η^⋆(σ; ε)) = σ η^⋆(σ; ε)(1 + η^⋆(σ; ε)) / ((1 − a)η^⋆(σ; ε) − a) = σ/(1 − √a)² = σ/(1 − √(σα_S(ε)))².

This completes the proof.

We now select δ for the same fixed ε. After substituting η = η^⋆(σ; ε) into ρ̂, the numerator of ρ̂ becomes

  1 + δ + (1 + δ^{−1}) α_{S_u}(ε) β_min(σ; ε).

Thus, for fixed σ and ε, we minimize this expression with respect to δ > 0 in order to minimize ρ̂. The following lemma provides such a choice of δ.

Lemma 5: Fix ε and σ ∈ (0, 1/α_S(ε)) and define

  f(δ) := 1 + δ + (1 + δ^{−1}) α_{S_u}(ε) β_min(σ; ε).
Then
$$\delta = \delta^\star(\sigma;\varepsilon) := \sqrt{\alpha_{S_u}(\varepsilon)\,\beta_{\min}(\sigma;\varepsilon)} \tag{48}$$
is the unique minimizer of $f(\delta)$ over $\delta > 0$, and the corresponding minimum value is
$$f\big(\delta^\star(\sigma;\varepsilon)\big) = \Big(1+\sqrt{\alpha_{S_u}(\varepsilon)\,\beta_{\min}(\sigma;\varepsilon)}\Big)^2.$$

Proof: Define $b := \alpha_{S_u}(\varepsilon)\,\beta_{\min}(\sigma;\varepsilon)$. Since $L$ is the Laplacian matrix of the connected graph, and $R \succ 0$ and $F \neq 0$, we have $S_u = c^2 L^2 \otimes F^\top R F \neq 0$, and hence $\alpha_{S_u}(\varepsilon) > 0$. Furthermore, under the condition (21), $\beta_{\min}(\sigma;\varepsilon) > 0$. Hence, we have $b > 0$. Then, for all $\delta > 0$, the function $f(\delta)$ can be written as
$$f(\delta) = 1+b+\delta+\frac{b}{\delta}.$$
Differentiating $f(\delta)$ with respect to $\delta$, we obtain
$$\frac{d}{d\delta}f(\delta) = 1-\frac{b}{\delta^2} = \frac{\delta^2-b}{\delta^2}.$$
Since $\delta^2 > 0$ for all $\delta > 0$, the sign of $\frac{d}{d\delta}f(\delta)$ is determined by the sign of $\delta^2-b$. Therefore,
$$\frac{d}{d\delta}f(\delta) < 0 \ \text{for} \ 0 < \delta < \sqrt{b}, \qquad \frac{d}{d\delta}f(\delta) = 0 \ \text{at} \ \delta = \sqrt{b}, \qquad \frac{d}{d\delta}f(\delta) > 0 \ \text{for} \ \delta > \sqrt{b}.$$
Hence, $f(\delta)$ is strictly decreasing on $(0, \sqrt{b})$ and strictly increasing on $(\sqrt{b}, \infty)$. It follows that $\delta = \sqrt{b} = \sqrt{\alpha_{S_u}(\varepsilon)\,\beta_{\min}(\sigma;\varepsilon)}$ is the unique minimizer of $f(\delta)$ over $\delta > 0$. Finally, substituting $\delta^\star(\sigma;\varepsilon) = \sqrt{b}$ into $f(\delta)$ yields
$$f\big(\delta^\star(\sigma;\varepsilon)\big) = 1+b+\sqrt{b}+\frac{b}{\sqrt{b}} = 1+2\sqrt{b}+b = \big(1+\sqrt{b}\big)^2 = \Big(1+\sqrt{\alpha_{S_u}(\varepsilon)\,\beta_{\min}(\sigma;\varepsilon)}\Big)^2,$$
which completes the proof.

After substituting (45) and (48) into (36), for a given $\varepsilon$, the sufficient condition in Theorem 1 reduces to $\rho(\sigma;\varepsilon) \le \rho$, where
$$\rho(\sigma;\varepsilon) = \frac{\Big(1+\sqrt{\alpha_{S_u}(\varepsilon)\,\beta_{\min}(\sigma;\varepsilon)}\Big)^2}{1-\varepsilon-\alpha_{\Gamma U}(\varepsilon)\,\beta_{\min}(\sigma;\varepsilon)}.$$
Therefore, for each fixed $\varepsilon$, we determine $\sigma$ by solving the following maximization problem:
$$\begin{aligned} \max_{\sigma} \quad & \sigma \\ \text{s.t.} \quad & 0 < \sigma < \frac{1}{\alpha_S(\varepsilon)}, \\ & 1-\varepsilon-\alpha_{\Gamma U}(\varepsilon)\,\beta_{\min}(\sigma;\varepsilon) > 0, \\ & \rho(\sigma;\varepsilon) \le \rho. \end{aligned} \tag{49}$$
This is an optimization problem over a one-dimensional feasible set. Moreover, since $\beta_{\min}(\sigma;\varepsilon)$ is strictly increasing in $\sigma$, the feasible set is an interval, and hence a bisection method can be used to compute its largest feasible point numerically. We denote the value obtained by this bisection by $\sigma^\star(\varepsilon)$.
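This bisection, together with the grid search over $\varepsilon$ used below to complete the design, can be sketched as follows. This is a hedged illustration only: the scalars $\alpha_S(\varepsilon)$, $\alpha_{S_u}(\varepsilon)$, and $\alpha_{\Gamma U}(\varepsilon)$ are replaced by hypothetical placeholder functions, since their actual values come from the SDP solution $\Omega_i^\star(\varepsilon)$; the formulas for $\beta_{\min}$ and $\rho(\sigma;\varepsilon)$ are those derived above.

```python
import math

# Hypothetical placeholders for alpha_S(eps), alpha_Su(eps), alpha_GammaU(eps);
# in the actual design these come from the SDP solution Omega_i^*(eps).
alpha_S  = lambda eps: 2.0 + eps
alpha_Su = lambda eps: 0.5
alpha_GU = lambda eps: 0.8
rho = 1.2  # prescribed performance level

def beta_min(sigma, eps):
    # beta_min(sigma; eps) = sigma / (1 - sqrt(sigma * alpha_S(eps)))^2
    return sigma / (1.0 - math.sqrt(sigma * alpha_S(eps))) ** 2

def feasible(sigma, eps):
    # Constraints of problem (49).
    if not (0.0 < sigma < 1.0 / alpha_S(eps)):
        return False
    b = beta_min(sigma, eps)
    den = 1.0 - eps - alpha_GU(eps) * b
    if den <= 0.0:
        return False
    return (1.0 + math.sqrt(alpha_Su(eps) * b)) ** 2 / den <= rho

def sigma_star(eps, iters=80):
    # beta_min is strictly increasing in sigma, so the feasible set is an
    # interval (0, sigma_max]; bisect for its right endpoint.
    lo, hi = 0.0, 1.0 / alpha_S(eps)
    if not feasible(hi * 1e-12, eps):   # even a tiny sigma is infeasible
        return None
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if feasible(mid, eps) else (lo, mid)
    return lo

# Grid search over eps in (0, 1 - 1/rho); pick the maximizer of sigma*(eps).
grid = [k * (1.0 - 1.0 / rho) / 20.0 for k in range(1, 20)]
best = max((e for e in grid if sigma_star(e) is not None),
           key=lambda e: sigma_star(e))
print("eps* =", best, "sigma* =", sigma_star(best))
```

With these placeholder values the loop simply demonstrates the mechanics: monotone feasibility in $\sigma$ makes the bisection exact up to machine precision, while the $\varepsilon$ grid is a one-dimensional sweep.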
Finally, we design $\varepsilon$ and accordingly determine the other parameters. Since $\rho(\sigma;\varepsilon)$ is increasing in $\varepsilon$, a larger $\varepsilon$ makes the performance constraint more restrictive. On the other hand, $\varepsilon$ appears only in the denominator of $\rho(\sigma;\varepsilon)$, so we determine it by a grid search. More precisely, we consider a finite set $\mathcal{E} \subset (0, 1-1/\rho)$, where the upper bound follows from the necessary condition
$$\frac{1}{1-\varepsilon} < \rho,$$
which is obtained by taking the limit $\beta_{\min}(\sigma;\varepsilon) \downarrow 0$. For each $\varepsilon \in \mathcal{E}$, we solve the maximization problem (49) and obtain the corresponding optimizer $\sigma^\star(\varepsilon)$. We then select
$$\varepsilon^\star \in \arg\max_{\varepsilon \in \mathcal{E}} \ \sigma^\star(\varepsilon),$$
and set $\sigma = \sigma^\star(\varepsilon^\star)$, $\Omega_i = \Omega_i^\star(\varepsilon^\star)$, $\eta = \eta^\star(\sigma^\star;\varepsilon^\star)$, and $\delta = \delta^\star(\sigma^\star;\varepsilon^\star)$ according to (44), (45), and (48), respectively.

Remark 5: The proposed parameter design method is a heuristic procedure, and hence the obtained parameters do not necessarily yield the largest possible value of $\sigma$. Likewise, although we aim to reduce the number of transmissions by enlarging $\sigma$, a larger $\sigma$ does not in itself guarantee the smallest number of transmissions. Instead, the design method in this section provides a tractable way to search for parameters that satisfy the given performance constraint and may lead to reduced communication in practice.

VI. NUMERICAL EXAMPLE

A. Simulation Setup

Consider a group of eight oscillators whose dynamics are given by the following continuous-time linear system:
$$\dot{x}_i(t) = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} x_i(t) + \begin{bmatrix} 0 \\ 1 \end{bmatrix} u_i(t), \quad i \in \{1,\dots,8\}, \tag{50}$$
where $x_i(t) = [x_{i,1}(t) \ \ x_{i,2}(t)]^\top \in \mathbb{R}^2$ is the state vector and $u_i(t) \in \mathbb{R}$ is the control input. The communication graph is the cycle graph over the eight agents depicted in Fig. 1.

Fig. 1. Communication graph (a cycle over agents 1–8).

We discretize (50) using
the sampling period $0.05$ and obtain the discrete-time system (1). The weighting matrices $Q$, $Q_\ell$, and $R$ are set as
$$Q = Q_\ell = \begin{bmatrix} 2 & 0 \\ 0 & 1 \end{bmatrix}, \quad R = 1,$$
and $\rho = 1.2$. By applying the parameter design method in Section V for $\rho = 1.2$, we obtain $\varepsilon = 0.0380$, $\sigma = 8.985 \times 10^{-6}$,
$$\Omega_i = \begin{bmatrix} 0.0286 & 0.0372 \\ 0.0372 & 0.0964 \end{bmatrix}, \quad i \in \{1,\dots,8\},$$
and $\rho(\sigma;\varepsilon) = 1.1999$, which confirms that the designed parameters satisfy the condition in Theorem 1.

B. Simulation Results

Fig. 2 depicts the trajectories of the first state component under the all-time communication scheme and the proposed event-triggered method, and Fig. 3 presents the corresponding control inputs. These figures show that the proposed method also drives all agents to consensus asymptotically. In addition, the transient responses under the proposed method remain close to those under the all-time communication scheme over the control horizon. This indicates that, although the agents communicate intermittently, the resulting closed-loop behavior remains similar to that of the baseline distributed control with all-time communication.

Fig. 2. State trajectories $x_{i,1}[k]$ for the all-time communication scheme (top) and the proposed event-triggered method (bottom).

Fig. 3. Control inputs for the all-time communication scheme (top) and the proposed event-triggered method (bottom).

Fig. 4 illustrates the transmission instants of each agent for the proposed event-triggered method.
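For reproducibility, the discrete-time model can be recovered from (50) with the stated sampling period. The paper does not specify the discretization method, so zero-order hold is an assumption in this sketch; for this harmonic-oscillator $A$ matrix the matrix exponential has a closed form (a rotation by $T$ radians), which the code checks against the standard augmented-matrix computation.

```python
import numpy as np
from scipy.linalg import expm

T = 0.05  # sampling period from the simulation setup
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
B = np.array([[0.0], [1.0]])

# Zero-order-hold discretization via the standard augmented-matrix trick:
# expm([[A, B], [0, 0]] * T) = [[Ad, Bd], [0, I]].
M = np.zeros((3, 3))
M[:2, :2], M[:2, 2:] = A, B
Md = expm(M * T)
Ad, Bd = Md[:2, :2], Md[:2, 2:]

# Closed form for this rotation generator: e^{AT} is a rotation by T rad,
# and Bd = (integral of e^{As} ds from 0 to T) @ B.
Ad_exact = np.array([[np.cos(T), np.sin(T)], [-np.sin(T), np.cos(T)]])
Bd_exact = np.array([[1.0 - np.cos(T)], [np.sin(T)]])

assert np.allclose(Ad, Ad_exact) and np.allclose(Bd, Bd_exact)
print("Ad =", Ad.round(6).tolist())
print("Bd =", Bd.round(6).tolist())
```

Note that $A_d$ is orthogonal with unit determinant, consistent with the undamped oscillator being marginally stable in discrete time.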
As shown in the figure, transmissions occur asynchronously and only at a subset of the time instants. In the present simulation, transmissions occur at about $20.8\%$ of all possible transmission opportunities across all agents.

C. Discussion

For this numerical example, the proposed method achieves consensus with trajectories close to those of the all-time communication scheme while using only about $20.8\%$ of all possible transmissions. In this simulation, the realized cost ratio over the horizon $200$ is $J_{\mathrm{etc}}/J_{\mathrm{all}} = 0.9960$, which is well below the prescribed performance level $\rho = 1.2$. This result suggests the conservatism of the present framework. First, Theorem 1 provides only a sufficient condition for $J_{\mathrm{etc}}(x[0]) \le \rho J_{\mathrm{all}}(x[0])$. In addition, the global LQ performance constraint must be satisfied through asynchronous triggering decisions based only on locally available information. Under this information structure, the exact global performance degradation cannot be evaluated directly by each agent, and the analysis relies on upper bounds expressed through scalar worst-case quantities such as $\alpha_S$, $\alpha_{S_u}$, and $\alpha_{\Gamma U}$. As a result, the condition is tractable but generally not tight. In addition, the parameter design in Section V is heuristic rather than globally optimal. Accordingly, the obtained parameter $\sigma$ is guaranteed only to satisfy the sufficient condition of Theorem 1, and is not guaranteed to be the largest admissible value under the given performance constraint.

It is also worth noting that, in this numerical example, the cost ratio satisfies $J_{\mathrm{etc}}/J_{\mathrm{all}} = 0.9960 < 1$ even though the proposed method exhibits fewer transmissions than the all-time communication scheme. As stated in Remark 3, $J_{\mathrm{all}}$ is not the globally minimal value of the LQ cost (2). Moreover, the all-time communication scheme and the proposed method use different information structures to compute their control inputs.
Hence, the comparison is made between two different distributed controllers under the same performance index (2), rather than between the same controller under different communication rates. Therefore, a higher communication rate does not necessarily imply a lower value of (2).

Nevertheless, this numerical example demonstrates the potential of the proposed framework for achieving a favorable trade-off between communication reduction and closed-loop performance. It also suggests that the conservatism stems from the gap between a global performance requirement and local asynchronous triggering decisions. Therefore, one possible way to reduce this conservatism is to establish a framework that enables each agent to evaluate the effect of its local triggering decision on the global performance degradation more accurately.

Fig. 4. Transmission instants of each agent for the proposed event-triggered method, where a value of 1 indicates that agent $i$ transmits at that time instant, and a value of 0 otherwise.

VII. CONCLUSION

This paper has studied distributed event-triggered consensus control for discrete-time linear multi-agent systems under an LQ performance constraint. We proposed a distributed event-triggered control method that guarantees the prescribed level of LQ performance as well as consensus when the triggering parameters are properly designed. In addition, we presented a tractable parameter design method for obtaining feasible triggering parameters while promoting transmission reduction.
Numerical simulations illustrated that the proposed method achieves consensus with transient responses close to those of the all-time communication scheme while reducing the number of transmissions. Future work includes reducing conservatism in the theoretical analysis and parameter design, and extending the proposed framework to more general communication settings.

REFERENCES

[1] W. Ren and R. W. Beard, Distributed Consensus in Multi-Vehicle Cooperative Control: Theory and Applications. Springer, 2008.
[2] A. Bidram, F. L. Lewis, and A. Davoudi, "Distributed control systems for small-scale power networks: Using multiagent cooperative control theory," IEEE Control Syst. Mag., vol. 34, no. 6, pp. 56–77, 2014.
[3] R. Olfati-Saber, J. A. Fax, and R. M. Murray, "Consensus and cooperation in networked multi-agent systems," Proc. IEEE, vol. 95, no. 1, pp. 215–233, 2007.
[4] K. Hengster-Movric, K. You, F. L. Lewis, and L. Xie, "Synchronization of discrete-time multi-agent systems on graphs using Riccati design," Automatica, vol. 49, no. 2, pp. 414–423, 2013.
[5] J. Jiao, H. L. Trentelman, and M. K. Camlibel, "A suboptimality approach to distributed linear quadratic optimal control," IEEE Trans. Autom. Control, vol. 65, no. 3, pp. 1218–1225, 2019.
[6] T. Feng, J. Zhang, Y. Tong, and H. Zhang, "Consensusability and global optimality of discrete-time linear multiagent systems," IEEE Trans. Cybern., vol. 52, no. 8, pp. 8227–8238, 2021.
[7] L. Yuan and H. Ishii, "Resilient consensus with multi-hop communication," IEEE Trans. Autom. Control, vol. 70, pp. 5973–5988, 2025.
[8] E. Garcia, Y. Cao, and D. W. Casbeer, "Decentralized event-triggered consensus with general linear dynamics," Automatica, vol. 50, no. 10, pp. 2633–2640, 2014.
[9] W. Hu, L. Liu, and G. Feng, "Consensus of linear multi-agent systems by distributed event-triggered strategy," IEEE Trans. Cybern., vol. 46, no. 1, pp. 148–157, 2015.
[10] C. Nowzari, E.
Garcia, and J. Cortés, "Event-triggered communication and control of networked systems for multi-agent consensus," Automatica, vol. 105, pp. 1–27, 2019.
[11] R. K. Mishra and H. Ishii, "Event-triggered control for discrete-time multi-agent average consensus," Int. J. Robust Nonlinear Control, vol. 33, no. 1, pp. 159–176, 2023.
[12] D. Meister, F. Aurzada, M. A. Lifshits, and F. Allgöwer, "Analysis of time- versus event-triggered consensus for a single-integrator multi-agent system," in Proc. IEEE 61st Conf. Decis. Control, 2022, pp. 441–446.
[13] ——, "Time- versus event-triggered consensus of a single-integrator multi-agent system," Nonlinear Analysis: Hybrid Systems, vol. 53, p. 101494, 2024.
[14] D. J. Antunes, D. Meister, T. Namerikawa, F. Allgöwer, and W. Heemels, "Consistent event-triggered consensus on complete graphs," in Proc. 62nd IEEE Conf. Decis. Control, 2023, pp. 3911–3916.
[15] D. Meister, D. J. Antunes, and F. Allgöwer, "How improving performance may imply losing consistency in event-triggered consensus," Automatica, vol. 185, p. 112768, 2026.
[16] Z. Li and Z. Duan, Cooperative Control of Multi-Agent Systems: A Consensus Region Approach. CRC Press, 2017.
[17] S. Boyd and L. Vandenberghe, Convex Optimization. Cambridge University Press, 2004.
[18] M. Grant and S. Boyd, "CVX: Matlab software for disciplined convex programming, version 2.1," https://cvxr.com/cvx/, 2014.