Leader Tracking of Euler-Lagrange Agents on Directed Switching Networks Using A Model-Independent Algorithm


Authors: Mengbin Ye, Brian D.O. Anderson (Life Fellow, IEEE), Changbin Yu (Senior Member, IEEE)

Abstract—In this paper, we propose a discontinuous distributed model-independent algorithm for a directed network of Euler-Lagrange agents to track the trajectory of a leader with non-constant velocity. We initially study a fixed network and show that the leader tracking objective is achieved semi-globally exponentially fast if the graph contains a directed spanning tree. By model-independent, we mean that each agent executes its algorithm with no knowledge of the parameter values of any agent's dynamics. Certain bounds on the agent dynamics (including any disturbances) and network topology information are used to design the control gain. This fact, combined with the algorithm's model-independence, results in robustness to disturbances and modelling uncertainties. Next, a continuous approximation of the algorithm is proposed, which achieves practical tracking with an adjustable tracking error. Last, we show that the algorithm is stable for networks that switch with an explicitly computable dwell time. Numerical simulations are given to show the algorithm's effectiveness.

Index Terms—model-independent, Euler-Lagrange agent, directed graph, distributed algorithm, tracking, switching network

I. INTRODUCTION

COORDINATION of multi-agent systems using distributed algorithms has been widely studied over the past decade [1]. Of recent interest is the study of agents whose dynamics are described using Euler-Lagrange equations of motion, which from here onwards will be referred to as Euler-Lagrange agents (in some literature known as Lagrangian agents). The non-linear Euler-Lagrange equation can be used to model the dynamics of a large class of mechanical, electrical and electro-mechanical systems [2].
Thus, there is significant motivation to study coordination problems with multiple Euler-Lagrange agents. The interaction between agents may be modelled using a graph [1], and the agents collectively form a network. Directed networks capture unilateral interactions (e.g. sensing or communication) between agents and are generally more desirable when compared to undirected networks.

To better place our results in context, two existing approaches for designing coordination algorithms for Euler-Lagrange networks are reviewed: model-dependent and adaptive algorithms. The aim is to give readers an idea of available works; the list is not exhaustive. The papers [3]–[5] study different coordination objectives, such as consensus or leader tracking, using algorithms that require exact knowledge of the agent models. Specifically, each agent's algorithm requires knowledge of its own Euler-Lagrange equation in order to execute. The algorithms are therefore less robust to uncertainties in the model, e.g. some parameters in the Euler-Lagrange equation may be unknown or uncertain. Recently, the more popular approach is for each agent to use an adaptive algorithm.

This work was supported by the Australian Research Council (ARC) under the ARC grants DP-130103610 and DP-160104500, by the National Natural Science Foundation of China (grant 61375072), and by Data61-CSIRO (formerly NICTA). M. Ye is supported by an Australian Government Research Training Program (RTP) Scholarship. M. Ye is with the Research School of Engineering, Australian National University. C. Yu and B.D.O. Anderson are with the Australian National University and with Hangzhou Dianzi University, Hangzhou, China. B.D.O. Anderson is also with Data61-CSIRO (formerly NICTA Ltd.), Canberra, Australia. {Mengbin.Ye, Brian.Anderson, Brad.Yu}@anu.edu.au
Specifically, an Euler-Lagrange equation can be linearly parametrised [2] with respect to a set of constant parameters of the equation, e.g. the mass of an arm on a robotic manipulator agent. This parametrisation is then used in an adaptive algorithm to allow the agent to estimate its own set of constant parameters (which is assumed to be unknown) while simultaneously achieving the multi-agent coordination objective. Using adaptive algorithms, containment control was studied in [6], [7], while leaderless consensus was studied in [7], [8]. Leader tracking algorithms were studied in [9]–[13].

In contrast to the above works, which rely on direct knowledge (or adaptive identification) of an agent model, there have been relatively few works studying model-independent algorithms, that is, algorithms for obtaining robust controllers. Furthermore, most results study model-independent algorithms on undirected networks. The pioneering work in [14] considered leaderless position consensus, with time-delay considered in [15]. Consensus to the intersection of target sets is studied in [16]. Leader-tracking algorithms are studied in [17]–[20]. Rendezvous to a stationary leader with collision avoidance is studied in [21]. For directed networks, several results are available. Passivity analysis in [22] showed that synchronisation of the velocities (but not of the positions) is achieved on strongly connected directed networks. Rendezvous to a stationary leader and position consensus were studied in [23] and [24], respectively, but those papers assumed that the agents did not have a gravitational term in the dynamics. Leader tracking is studied in [20], but restrictive assumptions are placed on the leader. Preliminary work by the authors also appeared in [25], and is further analysed below.

A. Motivation for Model-Independent Algorithms

Further study of model-independent algorithms is desirable for several reasons.
Given a unique Euler-Lagrange equation, determining the minimum number of parameters in an adaptive algorithm is difficult in general [26]. Moreover, the adaptive algorithms require knowledge of the exact equation structure; they can deal with uncertain constant parameters associated with the agent dynamics but are not robust to unmodelled nonlinear agent dynamics. Model-independent algorithms are reminiscent of robust controllers, which stand in conceptual contrast to adaptive controllers. Stability, and indeed performance, is guaranteed given limited knowledge of upper bounds on parameters of the multi-agent system, and without any attempt to identify these parameters.

As will be shown in this paper, and similarly to [23], [24], model-independent controllers are exponentially stable, with a computable minimum rate of convergence. Exponentially stable systems are preferred over systems which are asymptotically, but not exponentially, stable, because exponentially stable systems offer improved rejection of small amounts of noise and disturbance. Some algorithms requiring exact knowledge of the Euler-Lagrange equation have been shown to be exponentially stable [3], [5]. Further, adaptive controllers will yield exponential stability if certain conditions are satisfied, e.g. persistency of excitation; however, the above detailed works using adaptive algorithms have not verified such conditions.

B. Contributions of this paper

In this paper, we propose a discontinuous model-independent algorithm that allows a directed network of Euler-Lagrange agents to track a leader with arbitrary trajectory. First, we assume that the network is fixed and contains a directed spanning tree. Then, we relax this assumption to allow for a network with switching/dynamic interactions. In order to achieve stability, a set of scalar control gains must be sufficiently large, i.e. satisfy a set of lower bounding inequalities.
These inequalities involve limited knowledge of the bounds on the agent dynamic parameters, limited knowledge of the network topology, and a bound on the initial conditions (which may be arbitrarily large). This last requirement means the algorithm is semi-globally stable; a larger set of allowed initial conditions simply requires recomputation of the control gains. It is also shown that the algorithm is robust to heterogeneous, bounded disturbances for each individual agent.

We now record the points of contrast between this paper and the previously mentioned existing works. While several results have been listed studying leader tracking, most involving model-independent algorithms have been studied on undirected graphs. Those which do assume directed graphs primarily use adaptive algorithms. Most model-independent algorithms on directed networks consider position consensus or rendezvous to a stationary leader; introduction of a moving leader greatly increases the difficulty of the problem due to the complex, nonlinear Euler-Lagrange dynamics. Additionally, the work in [23], [24] did not consider the gravitational term in the agent dynamics. The work [20] studies a model-independent leader tracking algorithm on directed graphs, with the restrictive assumption that the leader trajectory is governed by a marginally stable linear time-invariant second-order system, with the system matrix known to all agents. A major contribution of this paper is to allow for any arbitrary leader trajectory which satisfies some mild and reasonable smoothness and boundedness properties. In addition, [20] does not establish an exponential stability property, whereas the algorithm proposed in this paper does.

A preliminary version of this paper appeared in [25]. This paper significantly extends the preliminary version in several aspects.
First, we introduce an additional control gain which allows for an additional degree of freedom in selecting the control gains to ensure stability. Moreover, increasing the new gain ensures stability while, unlike in [25], not negatively affecting the convergence rate. Second, we address the issues arising from the discontinuous nature of the control algorithm by using an approximation of the signum function. An explicit expression relating the tracking error to the degree of approximation and control gain is derived. Additionally, switching topology is considered. Details of omitted proofs are also now provided.

The paper is structured as follows. Section II introduces mathematical preliminaries and the problem. The problem with fixed network topology, and with dynamic topology, is solved in Sections III and IV, respectively. Simulations are provided in Section V and the paper is concluded in Section VI.

II. BACKGROUND AND PROBLEM STATEMENT

A. Mathematical Notation and Matrix Theory

To begin, definitions of notation and several results are provided. The Kronecker product is denoted ⊗. Denote the p × p identity matrix as I_p and the n-column vector of all ones as 1_n. The l_1-norm and Euclidean norm of a vector x (or matrix A) are denoted by ‖·‖_1 and ‖·‖_2, respectively. The signum function is denoted sgn(·); for an arbitrary vector x, sgn(x) is defined element-wise. A matrix A = A^T that is positive definite (respectively, nonnegative definite) is denoted by A > 0 (respectively, A ≥ 0). For two symmetric matrices A, B, the relation A > B is equivalent to A − B > 0. For a matrix A = A^T, the minimum and maximum eigenvalues are λ_min(A) and λ_max(A), respectively.
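The notation above, and the eigenvalue inequalities (1) stated next, are easy to check numerically. A minimal sketch; the matrices are arbitrary illustrations, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def lam_min(M): return np.linalg.eigvalsh(M).min()
def lam_max(M): return np.linalg.eigvalsh(M).max()

# Two hypothetical symmetric matrices used only to exercise the notation.
S = rng.standard_normal((4, 4))
A = S @ S.T + 4.0 * np.eye(4)        # A = A^T > 0 by construction
T = rng.standard_normal((4, 4))
B = 0.5 * (T + T.T)                  # B = B^T, indefinite in general

# (1a): lam_min(A) > lam_max(B) implies A - B > 0.
if lam_min(A) > lam_max(B):
    assert lam_min(A - B) > 0

# (1b), (1c): Weyl-type bounds for sums of symmetric matrices.
assert lam_max(A + B) <= lam_max(A) + lam_max(B) + 1e-9
assert lam_min(A + B) >= lam_min(A) + lam_min(B) - 1e-9

# (1d): Rayleigh quotient bounds for an arbitrary vector x.
x = rng.standard_normal(4)
q = x @ A @ x
assert lam_min(A) * (x @ x) - 1e-9 <= q <= lam_max(A) * (x @ x) + 1e-9
```

These bounds are used repeatedly in the stability analysis, so it is worth keeping their matrix forms in mind.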
The following inequalities hold:

λ_min(A) > λ_max(B) ⇒ A > B   (1a)
λ_max(A + B) ≤ λ_max(A) + λ_max(B)   (1b)
λ_min(A + B) ≥ λ_min(A) + λ_min(B)   (1c)
λ_min(A) x^T x ≤ x^T A x ≤ λ_max(A) x^T x   (1d)

Definition 1. A function f(x) : D → R, where D ⊆ R^n, is said to be positive definite in D if f(x) > 0 for all x ∈ D except x = 0, and f(0) = 0.

Lemma 1 (The Schur Complement [27]). Consider a symmetric block matrix, partitioned as

A = [ B     C
      C^T   D ]   (2)

Then A > 0 if and only if B > 0 and D − C^T B^{-1} C > 0, or equivalently, if and only if D > 0 and B − C D^{-1} C^T > 0.

Lemma 2. Suppose A > 0 is defined as in (2). Let a quadratic function with arguments x, y be expressed as W = [x^T, y^T] A [x^T, y^T]^T. Define F := B − C D^{-1} C^T and G := D − C^T B^{-1} C. Then there holds

λ_min(F) x^T x ≤ x^T F x ≤ W   (3a)
λ_min(G) y^T y ≤ y^T G y ≤ W   (3b)

Proof. We obtain (3b) by recalling Lemma 1 and observing that W = y^T G y + [y^T C^T B^{-1} + x^T] B [B^{-1} C y + x]. An equally straightforward proof yields (3a).

Lemma 3. Let g(x, y) be a function given as

g(x, y) = a x^2 + b y^2 − c x y^2 − d x y   (4)

for real positive scalars c, d > 0. Then, for a given X > 0, there exist a, b > 0 such that g(x, y) is positive definite for all y ∈ [0, ∞) and x ∈ [0, X].

Proof. Observing that c x y^2 ≤ c X y^2 for all x ∈ [0, X] yields

g(x, y) ≥ a x^2 + (b − c X) y^2 − d x y   (5)

if y ∈ [0, ∞) and x ∈ [0, X], because c > 0. For any fixed value y = y_1 ∈ [0, ∞), write ḡ(x) = a x^2 + (b − c X) y_1^2 − d x y_1. The discriminant of ḡ(x), viewed as a quadratic in x, is negative if

b > c X + d^2 / (4a)   (6)

which implies that the roots of ḡ(x) are complex, i.e. ḡ(x) > 0, and this holds for any y_1 ∈ [0, ∞). We thus conclude that, for all y ∈ [0, ∞) and x ∈ [0, X], if a, b satisfy (6), then g(x, y) > 0, except that g(x, y) = 0 if and only if x = y = 0.

Corollary 1.
Let h(x, y) be a function given as

h(x, y) = a x^2 + b y^2 − c x y^2 − d x y − e x − f y   (7)

where the real positive scalars c, d, e, f and two further positive scalars ε, ϑ are fixed. Suppose that for given Y, X there holds Y − ε > 0 and X − ϑ > 0. Define the sets U = {(x, y) : x ∈ [X − ϑ, X], y > 0} and V = {(x, y) : x > 0, y ∈ [Y − ε, Y]}, and define the region R = U ∪ V. Then there exist a, b > 0 such that h(x, y) is positive definite in R.

Proof. Observe that h(x, y) = g(x, y) − e x − f y, where g(x, y) is defined in Lemma 3. Let a*, b* satisfy condition (6) in Lemma 3, so that g(x, y) > 0 for x ∈ [0, X] and y ∈ [0, ∞). Note that the positivity condition on g(x, y) in Lemma 3 continues to hold for any a ≥ a* and any b ≥ b*. Let a_1 and b_1 be positive scalars whose magnitudes will be determined later, and define a = a_1 + a* and b = b_1 + b*. Define z(x, y) := a_1 x^2 + b_1 y^2 − e x − f y.

Next, consider (x, ȳ) ∈ V, where ȳ is some fixed value. It follows that

z(x, ȳ) = a_1 x^2 − e x + (b_1 ȳ^2 − f ȳ)   (8)

Note that the discriminant of z(x, ȳ), viewed as a quadratic in x, is D_x = e^2 − 4 a_1 (b_1 ȳ^2 − f ȳ). It follows that D_x < 0 if b_1 ȳ^2 > f ȳ + e^2/(4 a_1). This is satisfied, independently of ȳ ∈ [Y − ε, Y], for any b_1 ≥ b_{1,y}, a_1 ≥ a_{1,y}, where

b_{1,y} > e^2 / (4 a_{1,y} (Y − ε)^2) + f / (Y − ε)   (9)

because Y − ε ≤ ȳ. It follows that D_x < 0 ⇒ z(x, y) > 0 in V.

Now consider (x̄, y) ∈ U for some fixed value x̄. It follows that

z(x̄, y) = b_1 y^2 − f y + (a_1 x̄^2 − e x̄)   (10)

and note that the discriminant of z(x̄, y), viewed as a quadratic in y, is D_y = f^2 − 4 b_1 (a_1 x̄^2 − e x̄). Suppose that a_1 > e/(X − ϑ), which ensures that a_1 x̄^2 − e x̄ > 0 for all x̄ ∈ [X − ϑ, X]. Then D_y < 0 if b_1 (a_1 x̄^2 − e x̄) > f^2/4.
This is satisfied, independently of x̄ ∈ [X − ϑ, X], for any b_1 ≥ b_{1,x}, a_1 ≥ a_{1,x}, where

b_{1,x} > f^2 / ( 4 ( a_{1,x} (X − ϑ)^2 − e (X − ϑ) ) )   (11)

It follows that D_y < 0 ⇒ z(x, y) > 0 in U. We conclude that setting b = b* + max[b_{1,x}, b_{1,y}] and a = a* + max[a_{1,x}, a_{1,y}] implies h(x, y) > 0 in R, except h(0, 0) = 0.

The results of Lemma 3 and Corollary 1 are almost intuitively obvious. However, the detailed statements in the proofs lay out explicit inequalities for a, b. These inequalities will be used to show that, for any given Euler-Lagrange network, control gains can always be found to ensure leader tracking is achieved.

B. Graph Theory

The agent interactions can be modelled by a weighted directed graph, denoted G = (V, E, A), with node set V = {v_0, v_1, ..., v_n} and a corresponding set of ordered edges E ⊆ V × V. A directed edge e_ij = (v_i, v_j) is outgoing with respect to v_i and incoming with respect to v_j, and implies that v_j is able to obtain some information from v_i. The precise nature of this information will be made clear in the sequel. The weighted adjacency matrix A ∈ R^{(n+1)×(n+1)} of G has nonnegative elements a_ij, with a_ij > 0 ⇔ e_ji ∈ E, a_ij = 0 if e_ji ∉ E, and a_ii = 0 for all i. The neighbour set of v_i is denoted by N_i = {v_j ∈ V : (v_j, v_i) ∈ E}. The (n+1) × (n+1) Laplacian matrix L = {l_ij} of the associated digraph G is defined by l_ij = −a_ij for j ≠ i and l_ii = Σ_{k≠i} a_ik. A directed spanning tree is a directed graph formed by directed edges of the graph that connects all the nodes, and in which every vertex apart from the root has exactly one parent. A graph is said to contain a directed spanning tree if a subset of its edges forms a directed spanning tree. We make use of the following standard lemma.

Lemma 4 ([28]).
Let G contain a directed spanning tree, and suppose there are no edges incoming to the root vertex of the tree, which, without loss of generality, is set as v_0. Then the Laplacian of G can be partitioned as

L = [ 0       0_{1×n}
      L_21    L_22 ]   (12)

and there exists a diagonal Γ > 0 such that Γ L_22 + L_22^T Γ > 0. For future use, denote the i-th diagonal element of Γ as γ_i, and define γ̄ as the maximum, and γ as the minimum, of the γ_i.

C. Euler-Lagrange Systems

The i-th Euler-Lagrange agent's equation of motion is

M_i(q_i) q̈_i + C_i(q_i, q̇_i) q̇_i + g_i(q_i) + ζ_i = τ_i   (13)

where q_i(t) ∈ R^p is the vector of generalised coordinates. Note that, from here onwards, we drop the time argument t whenever there is no ambiguity. The inertia matrix is M_i(q_i) ∈ R^{p×p}, C_i(q_i, q̇_i) ∈ R^{p×p} is the Coriolis and centrifugal force matrix, g_i ∈ R^p is the vector of (gravitational) potential forces, and ζ_i(t) is an unknown, time-varying disturbance. It is assumed that all agents are fully actuated, with τ_i ∈ R^p being the control input vector. For each agent, the k-th generalised coordinate is denoted using superscript (k); thus q_i = [q_i^(1), ..., q_i^(p)]^T. It is assumed that the systems described by (13) have the following properties:

P1  The matrix M_i(q_i) is symmetric positive definite.
P2  There exist scalar constants k_m, k_M > 0 such that k_m I_p ≤ M_i(q_i) ≤ k_M I_p for all i, q_i. It follows that sup_{q_i} ‖M_i‖_2 ≤ k_M and sup_{q_i} ‖M_i^{-1}‖_2 ≤ k_m^{-1} hold for all i.
P3  The matrix C_i(q_i, q̇_i) is defined such that Ṁ_i − 2 C_i is skew-symmetric, i.e. Ṁ_i = C_i + C_i^T.
P4  There exist scalar constants k_C, k_g > 0 such that ‖C_i‖_2 ≤ k_C ‖q̇_i‖_2 for all i, q̇_i, and ‖g_i‖_2 < k_g for all i.
P5  There exists a constant k_ζ such that ‖ζ_i‖_2 ≤ k_ζ for all i.

Properties P1–P4 are standard and widely assumed properties of Euler-Lagrange dynamical systems; see [2] for details.
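As a concrete, hypothetical instance of (13) with p = 1, a single-link pendulum satisfies P1–P4: M(q) = m·l^2 is constant, C = 0 (so Ṁ − 2C = 0 is trivially skew-symmetric), and g(q) = m·g0·l·sin(q) is bounded. The numeric parameter values below are our own assumptions, chosen purely for illustration:

```python
import numpy as np

# Hypothetical single-link pendulum as an instance of (13), p = 1.
m, l, g0 = 1.2, 0.8, 9.81          # assumed mass, length, gravity (illustrative)

def M(q):       return np.array([[m * l**2]])          # inertia matrix, constant
def C(q, qdot): return np.array([[0.0]])               # Coriolis matrix is zero here
def g(q):       return np.array([m * g0 * l * np.sin(q[0])])  # gravity vector

k_m = k_M = m * l**2               # P2: k_m I_p <= M(q) <= k_M I_p for all q
k_g = m * g0 * l + 1e-9            # P4: ||g(q)||_2 < k_g for all q

q, qdot = np.array([0.3]), np.array([-1.0])
# P1, P2: M symmetric positive definite with uniform bounds.
assert k_m - 1e-12 <= np.linalg.eigvalsh(M(q)).min() <= k_M + 1e-12
# P4: gravity bound.
assert np.linalg.norm(g(q)) < k_g
# P3: Mdot - 2C skew-symmetric; here M is constant and C = 0.
assert np.allclose(M(q) - M(q + 0.1), 0.0)
```

For this agent any k_C > 0 works in P4, since C ≡ 0.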
Property P5 is a reasonable assumption on disturbances.

D. Problem Statement

The leader is denoted as agent 0, i.e. vertex v_0, with q_0(t) and q̇_0(t) being its time-varying generalised coordinates and generalised velocity, respectively. The objective is to develop a model-independent, distributed algorithm which allows a directed network of Euler-Lagrange agents to synchronise to, and track, the trajectory of the leader. The leader tracking objective is said to be achieved if lim_{t→∞} ‖q_i(t) − q_0(t)‖_2 = 0 and lim_{t→∞} ‖q̇_i(t) − q̇_0(t)‖_2 = 0 for all i = 1, ..., n. By model-independent, we mean that the algorithm does not contain M_i, C_i, g_i for any i, nor make use of an associated linear parametrisation. Two mild assumptions are now given.

Assumption 1. The leader trajectory q_0(t) is a C^2 function with derivatives q̇_0 and q̈_0 which are bounded as ‖1_n ⊗ q̇_0‖_2 ≤ k_p and ‖1_n ⊗ q̈_0‖_2 ≤ k_q. The positive constants k_p, k_q are known a priori.

Assumption 2. All possible initial conditions lie in some fixed but arbitrarily large set that is known. In particular, ‖q_i‖_2 ≤ k_a/√n and ‖q̇_i‖_2 ≤ k_b/√n, where k_a, k_b are known a priori.

These two assumptions are not unreasonable, as many systems will have an expected operating range for q and q̇. The follower agents' capability to sense relative states is captured by the directed graph G_A with associated Laplacian L_A. In Section III, we assume G_A is fixed; later, in Section IV, it is assumed that G_A is dynamic, i.e. time-varying. If a_ij > 0 then agent i can sense q_i − q_j and q̇_i − q̇_j. We denote the neighbour set of agent i on G_A as N_Ai. We further assume that agent i can measure its own q_i and q̇_i. A second weighted and directed time-varying graph G_B(t), with associated Laplacian L_B(t), exists between the followers to communicate estimates of the leader's state.
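As an aside, the diagonal matrix Γ guaranteed by Lemma 4, which appears throughout the analysis below, can be computed numerically for a given follower block L_22. The sketch below uses a chain graph and a standard M-matrix construction (both are our own illustrative assumptions, not constructions given in the paper), and verifies positive definiteness directly:

```python
import numpy as np

# Followers in a directed chain rooted at the leader v0: v0 -> v1 -> v2 -> v3.
# L22 is the follower-to-follower block of the Laplacian, as in (12).
L22 = np.array([[ 1.0,  0.0, 0.0],
                [-1.0,  1.0, 0.0],
                [ 0.0, -1.0, 1.0]])

# Candidate construction (an assumption, verified numerically below): for a
# nonsingular M-matrix L22, solve L22^T gamma = 1_n; then gamma > 0 and
# Gamma = diag(gamma) renders Gamma L22 + L22^T Gamma symmetric positive definite.
gamma = np.linalg.solve(L22.T, np.ones(3))
Gamma = np.diag(gamma)

Q = Gamma @ L22 + L22.T @ Gamma
assert (gamma > 0).all()
assert np.linalg.eigvalsh(Q).min() > 0     # Gamma L22 + L22^T Gamma > 0
```

For this chain, gamma works out to (3, 2, 1); the scaling of Γ is free, which is exploited later when Γ is normalised so that γ̄ = 1.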
Denote the neighbour set of agent i on G_B(t) at time t as N_Bi(t). Note that v_j ∈ N_Bi(t) when agent j communicates directly to agent i its estimates of the leader's state at time t (the precise nature of this estimate is described in Section III-A). Further note that G_A is not necessarily equal to G_B, and so N_Ai ≠ N_Bi in general; however, the node sets of G_A and G_B are the same.

Remark 1 (Comparison of this paper to recent leader tracking results). Almost all mechanical systems will have trajectories which satisfy the mild Assumption 1. In comparison, more restrictive assumptions are made on the leader trajectory in [13], [20]. In [13], [20], the leader trajectory is describable by an LTI system, with system matrix defined as S. In [13], it is assumed that all eigenvalues of S are purely imaginary. In [20], it is assumed that S is marginally stable. More importantly, both [13] and [20] assume that S is known to all agents, which is a highly restrictive assumption. As will become apparent in the sequel, we use a distributed observer to allow every agent to obtain q_0(t) and q̇_0(t) precisely. The work [12] has similar assumptions to this paper, but uses an adaptive algorithm and is therefore fundamentally different to the model-independent controller studied in this paper.

III. LEADER TRACKING ON FIXED DIRECTED NETWORKS

A. Finite-Time Distributed Observer

Before we show the main result, we detail a distributed finite-time observer, developed in [29], which allows each follower agent to obtain q_0 and q̇_0. Let r̂_i and v̂_i be the i-th agent's estimated values for the leader position and velocity, respectively. Agent i ∈ {1, ..., n} runs the observer

dr̂_i/dt = v̂_i − ω_1 sgn( Σ_{j∈N_Bi(t)} b_ij(t) (r̂_i − r̂_j) )   (14a)
dv̂_i/dt = −ω_2 sgn( Σ_{j∈N_Bi(t)} b_ij(t) (v̂_i − v̂_j) )   (14b)

where b_ij are the elements of the adjacency matrix associated with graph G_B(t), and ω_1, ω_2 > 0 are internal gains of the observer. Clearly, if a_i0 > 0 then agent i can directly sense the leader v_0 and thus learns q_0 and q̇_0. For such an agent i, we set b_i0 > 0, with r̂_0(t) = q_0(t) and v̂_0(t) = q̇_0(t); agent i still runs the distributed observer (14). We now give a theorem for convergence of the observer, and explain below why all followers execute (14) even if they learn of q_0, q̇_0 from G_A.

Theorem 1 (Theorem 4.1 of [29]). Suppose that the leader trajectory q_0(t) satisfies Assumption 1. If at every t, G_B(t) contains a directed spanning tree, and ω_2 > k_q/√n, then, for some T_1 < ∞, there holds r̂_i(t) = q_0(t) and v̂_i(t) = q̇_0(t) for all i ∈ {1, ..., n} and all t ≥ T_1.

The key reason for agent i to run the distributed observer even if a_i0 > 0 (and thus agent i knows q_0 and q̇_0) is to ensure robustness to network changes over time (e.g. switching topology due to loss of connection). We elaborate further. In the case of a fixed G_A, agent i would know q_0 and q̇_0 for all t and there would be no need for the observer. However, we explore switching G_A(t) in Section IV. Consider the case where a_i0(t) = 1 for t ∈ [0, 10) and a_i0(t) = 0 for t ∈ [10, ∞). If agent i does not run (14), then for t ≥ 10 it would not know q_0(t) and q̇_0(t), because a_i0(t) = 0 ⇒ b_i0(t) = 0. If agent i runs (14) from t = 0, then for all t ≥ T_1 it is guaranteed that r̂_i(t) = q_0(t) and v̂_i(t) = q̇_0(t) even if G_A(t) switches, so long as the connectivity condition in Theorem 1 is satisfied.
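The observer (14) is straightforward to simulate. A minimal forward-Euler sketch for p = 1; the chain graph v_0 → v_1 → v_2 → v_3, unit weights, leader trajectory q_0(t) = sin t, and gains ω_1 = ω_2 = 2 are all our own illustrative assumptions (discretising sgn introduces small chattering of order ω·dt rather than exact finite-time convergence):

```python
import numpy as np

# Forward-Euler simulation of observer (14) on a chain v0 -> v1 -> v2 -> v3,
# all weights b_ij = 1, leader q0(t) = sin(t) (so Assumption 1 holds).
dt, T = 1e-3, 6.0
w1, w2 = 2.0, 2.0                       # assumed observer gains
nbr = {1: [0], 2: [1], 3: [2]}          # neighbour sets on G_B

r = np.zeros(4)                         # r[i], v[i]: agent i's estimates;
v = np.zeros(4)                         # index 0 holds the true leader state
for k in range(int(T / dt)):
    t = k * dt
    r[0], v[0] = np.sin(t), np.cos(t)   # r_hat_0 = q0, v_hat_0 = dq0/dt
    dr, dv = np.empty(3), np.empty(3)
    for i in (1, 2, 3):
        dr[i - 1] = v[i] - w1 * np.sign(sum(r[i] - r[j] for j in nbr[i]))
        dv[i - 1] = -w2 * np.sign(sum(v[i] - v[j] for j in nbr[i]))
    r[1:] += dt * dr                    # synchronous Euler update
    v[1:] += dt * dv

# After the reaching phase, every follower's estimate tracks the leader
# to within the discretisation-induced chattering.
assert all(abs(r[i] - np.sin(T)) < 0.05 for i in (1, 2, 3))
assert all(abs(v[i] - np.cos(T)) < 0.05 for i in (1, 2, 3))
```

Note how the estimates propagate down the chain: each follower first locks onto its upstream neighbour's velocity estimate, then its position estimate, which is consistent with the finite-time T_1 in Theorem 1.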
A second reason is that agent i may acquire states by sensing over G_A; (14) acts as a filter for noisy measurements.

B. Model-Independent Control Law

Consider the following algorithm for the i-th agent:

τ_i = −η Σ_{j∈N_Ai} a_ij [ (q_i − q_j) + µ (q̇_i − q̇_j) ] − β sgn( (q_i − r̂_i) + µ (q̇_i − v̂_i) )   (15)

where a_ij is the weighted (i, j) entry of the adjacency matrix A associated with the weighted directed graph G_A. The control gains µ, η and β are strictly positive constants; their design will be specified later. For simplicity, it is assumed that η > 1. Note that, for all i and all t > T_1, r̂_i is replaced with q_0 and v̂_i with q̇_0.

Let us denote the error variable e_qi = q_i − q_0, and let e_q = [e_q1^T, ..., e_qn^T]^T ∈ R^{np×1} be the stacked column vector of all e_qi. The leader tracking objective is therefore achieved if e_q(t) → 0 and ė_q(t) → 0 as t → ∞. We denote g = [g_1^T, ..., g_n^T]^T, ζ = [ζ_1^T, ..., ζ_n^T]^T, q = [q_1^T, ..., q_n^T]^T and q̇ = [q̇_1^T, ..., q̇_n^T]^T as the np × 1 stacked column vectors of all g_i, ζ_i, q_i and q̇_i, respectively. Let M(q) = diag[M_1(q_1), ..., M_n(q_n)] ∈ R^{np×np} and C(q, q̇) = diag[C_1(q_1, q̇_1), ..., C_n(q_n, q̇_n)] ∈ R^{np×np}. Since M_i > 0 for all i, M is also symmetric positive definite. Define the observer error vectors e_i = r̂_i − q_0 and ė_i = v̂_i − q̇_0 for all i = 1, ..., n, and stack e = [e_1^T, ..., e_n^T]^T ∈ R^{np×1} and ė = [ė_1^T, ..., ė_n^T]^T ∈ R^{np×1}.

The definition of e_qi yields M_i ë_qi = M_i q̈_i − M_i q̈_0, and combining the agent dynamics (13) and the control law (15), the closed-loop system for the follower network, with nodes v_1, ..., v_n, can be expressed as

ë_q ∈^{a.e.} K[ −M^{-1} ( C ė_q + η (L_22 ⊗ I_p)(e_q + µ ė_q) + g + ζ + β sgn(s + µ ṡ) + M (1_n ⊗ q̈_0) + C (1_n ⊗ q̇_0) ) ]   (16)

where K denotes the differential inclusion, a.e. stands for "almost everywhere", and s = e_q − e. Here, L_22 is the lower-right block of L_A as partitioned in (12). Filippov solutions of e_q and ė_q for (16) exist because the signum function is measurable and locally essentially bounded, and e_q and ė_q are absolutely continuous functions of time [30].

C. An Upper Bound Using Initial Conditions

Before proceeding with the main proof, we calculate an upper bound (which may not be tight) on the initial states, expressed as ‖e_q(0)‖_2 < X and ‖ė_q(0)‖_2 < Y, using Assumption 2. In the sequel, we show that these bounds hold for all time, and exponential convergence results. In keeping with the model-independent approach, define

V̄_µ = [e_q^T, ė_q^T] [ η λ_max(X) I_np              (1/2) µ^{-1} (k_M + δ) I_np
                        (1/2) µ^{-1} (k_M + δ) I_np   (1/2) (k_M + δ) I_np ] [e_q^T, ė_q^T]^T   (17)

where X = (Γ L_22 + L_22^T Γ) ⊗ I_p > 0 from Lemma 4, and the constant δ > 0 is sufficiently small such that k_m − δ > 0. Without loss of generality, assume that Γ is scaled such that γ̄ = 1. Let the matrix in (17) be L_µ. By observing that (k_M + δ) I_np > M, then according to Lemma 1, L_µ > 0 if and only if η λ_max(X) I_np − (1/2) µ^{-2} (k_M + δ) I_np > 0. It follows that L_µ > 0 for any µ ≥ µ_1*, where µ_1* > √((k_M + δ)/(2 λ_max(X))). Since X > 0, such a µ_1* always exists. For convenience, we use V̄_µ(t) to denote V̄_µ(e_q(t), ė_q(t)), and observe that there holds

V̄_µ(t) ≤ η λ_max(X) ‖e_q‖_2^2 + (k_M + δ) ( (1/2) ‖ė_q‖_2^2 + µ^{-1} ‖e_q‖_2 ‖ė_q‖_2 )   (18)

for all t. Next, define

V_µ = (1/2) [e_q^T, ė_q^T] [ (1/2) η λ_min(X) I_np       µ^{-1} γ (k_m − δ) I_np
                              µ^{-1} γ (k_m − δ) I_np      γ (k_m − δ) I_np ] [e_q^T, ė_q^T]^T   (19)

Call the matrix in (19) N_µ. Similarly to above, use Lemma 1 to show that N_µ > 0 for any µ ≥ µ_2*, where µ_2* > √(2 γ (k_m − δ)/λ_min(X)). Set µ_3* = max{µ_1*, µ_2*}.
Define

ρ_1(µ) = η λ_max(X) − (1/2) µ^{-2} (k_M + δ)   (20a)
ρ_2(µ) = (1/4) η λ_min(X) − (1/2) µ^{-2} γ (k_m − δ)   (20b)

and verify that ρ_1(µ_3*) > ρ_2(µ_3*). Note that for any µ ≥ µ_3* there holds V̄_µ ≤ V̄_{µ_3*} and ρ_i(µ_3*) ≤ ρ_i(µ), i = 1, 2. Compute

V̄* = η λ_max(X) k_a^2 + (1/2)(k_M + δ) k_b^2 + (µ_3*)^{-1} (k_M + δ) k_a k_b

From Assumption 2, one has that ‖e_q(0)‖_2 ≤ k_a and ‖ė_q(0)‖_2 ≤ k_b. Thus, one concludes from (18) and the above equation that there holds V̄_µ(0) ≤ V̄* for any µ ≥ µ_3*. Because we assumed η > 1, it follows from Lemma 2 and (3a) that

‖e_q(0)‖_2 ≤ √( V̄_µ(0)/ρ_1(µ) ) ≤ √( V̄_µ(0)/ρ_1(µ_3*) ) < √( V̄*/ρ_2(µ_3*) ) =: X_1   (21)

Following a similar method yields Y_1. Next, compute

V̂* = η λ_max(X) X_1^2 + (1/2)(k_M + δ) Y_1^2 + (µ_3*)^{-1} (k_M + δ) X_1 Y_1

and observe that V̄* ≤ V̂*. Lastly, compute the bound

X = √( V̂*/ρ_2(µ_3*) )   (22)

and notice that ‖e_q(0)‖_2 ≤ X_1 ≤ X. Similarly, Y is obtained using (3b), with the steps omitted due to space limitations. Because both sides of (22) are independent of µ, the values X and Y do not change for all µ ≥ µ_3*. (Note that V̄_µ is not a Lyapunov function.)

D. Stability Proof

Theorem 2. Suppose that the conditions in Theorem 1 are satisfied. Under Assumptions 1 and 2, leader tracking is achieved exponentially fast if 1) the network G_A contains a directed spanning tree with the leader as the root node, and 2) the control gains µ, η, β satisfy a set of lower bounding inequalities (in Remark 3, we detail an approach for designing the gains). For a given G_A containing a directed spanning tree, there always exist µ, η, β which satisfy the inequalities.

Proof. The proof will be presented in four parts. In Part 1, we study a Lyapunov-like candidate function V. In Part 2, we analyse V̇ and show that it is upper bounded.
Part 3 shows that the system trajectory remains bounded for all time, and exponential convergence is proved in Part 4.

Part 1: Consider the Lyapunov-like candidate function

V = (1/2) η e_q^T X e_q + µ^{-1} e_q^T Γ_p M ė_q + (1/2) ė_q^T Γ_p M ė_q =: V_1 + V_2 + V_3

with X given below (17), and Γ_p = Γ ⊗ I_p. Observe that

V = [e_q^T, ė_q^T] [ (1/2) η X              (1/2) µ^{-1} Γ_p M
                     (1/2) µ^{-1} Γ_p M      (1/2) Γ_p M ] [e_q^T, ė_q^T]^T   (23)

Call the matrix in (23) H_µ. From Lemma 1, and the assumed properties of M_i, there holds H_µ > 0 if and only if η X − µ^{-2} Γ_p M > 0, which is implied by λ_min(X) − µ^{-2} k_M > 0. This is because k_M ≥ sup_q λ_max(M), and we assumed that η > 1 and γ̄ = 1. For any µ ≥ µ_4*, where

µ_4* > √( 2 k_M / λ_min(X) )   (24)

there holds L_µ > H_µ > N_µ > 0, because µ_4* ≥ µ_3* as defined below (19). Thus, although the eigenvalues λ_i(H_µ) depend on q(t), there holds λ_min(N_µ) ≤ λ_i(H_µ) ≤ λ_max(L_µ) for all i and all t ≥ 0. Thus, for any µ ≥ µ_4*, V > 0 and is radially unbounded. For simplicity, let V(t) denote V(e_q(t), ė_q(t)), and observe that V(t) < V̄_µ(t) for all t because

V(t) ≤ (1/2) η λ_max(X) ‖e_q(t)‖_2^2 + (1/2) k_M ‖ė_q(t)‖_2^2 + µ^{-1} k_M ‖e_q(t)‖_2 ‖ė_q(t)‖_2   (25)

Part 2: Let V̇ be the set-valued derivative of V with respect to time, along the trajectories of the system (16). From the decomposition V = V_1 + V_2 + V_3 we obtain V̇ = V̇_1 + V̇_2 + V̇_3. We obtain V̇_1 = η e_q^T X ė_q. The second summand yields

V̇_2 ∈ µ^{-1} ė_q^T Γ_p M ė_q + µ^{-1} e_q^T Γ_p Ṁ ė_q + µ^{-1} e_q^T Γ_p M K[ë_q]

Substituting ë_q from (16), and using Property P3, we obtain

V̇_2 ∈ K[ −e_q^T (Γ L_22 ⊗ I_p)(µ^{-1} η e_q + η ė_q) + µ^{-1} ė_q^T Γ_p M ė_q + µ^{-1} e_q^T Γ_p C^T ė_q − µ^{-1} e_q^T Γ_p (∆ + C (1_n ⊗ q̇_0)) − β µ^{-1} e_q^T Γ_p sgn(s + µ ṡ) ]   (26)

where ∆ = g + ζ + M (1_n ⊗ q̈_0).
Similarly, \(\dot V_3\) is
\[
\dot V_3 \in \dot e_q^\top \Gamma_p M\, \mathcal{K}\big[\ddot e_q\big] + \tfrac{1}{2}\dot e_q^\top \Gamma_p \dot M \dot e_q \tag{27}
\]

² In Remark 3, we detail an approach for designing the gains.

Substituting \(\ddot e_q\) from (16) and using Assumption P3, we obtain
\[
\dot V_3 \in \mathcal{K}\Big[-\eta\dot e_q^\top(\Gamma L_{22} \otimes I_p) e_q - \mu\eta\dot e_q^\top(\Gamma L_{22} \otimes I_p)\dot e_q - \beta\dot e_q^\top \Gamma_p\,\mathrm{sgn}(s+\mu\dot s) - \dot e_q^\top \Gamma_p\big(\Delta + C(\mathbf{1}_n \otimes \dot q_0)\big)\Big] \tag{28}
\]
When combining \(\dot V \in \dot V_1 + \dot V_2 + \dot V_3\), notice that \(\dot V_1\), the term \(-e_q^\top(\Gamma L_{22}\otimes I_p)\dot e_q\) of (26) and the first summand of (28) cancel. Let \(x = e_q + \mu\dot e_q\) and \(y = e + \mu\dot e\). Recalling the definition of \(s = e_q - e\), we thus have
\[
\dot V \in -\mu^{-1}\mathcal{K}\Big[\tfrac{1}{2}\eta e_q^\top X e_q + \tfrac{1}{2}\mu^2\eta\dot e_q^\top X \dot e_q - \dot e_q^\top \Gamma_p M \dot e_q - e_q^\top \Gamma_p C^\top \dot e_q + x^\top \Gamma_p\big(C(\mathbf{1}\otimes\dot q_0) + \Delta\big) + \beta x^\top \Gamma_p\,\mathrm{sgn}(x-y)\Big] \tag{29}
\]
From the bounds on \(g\), \(M\) and \(\mathbf{1}\otimes\ddot q_0\), and because we normalised \(\bar\gamma = 1\), it follows that \(x^\top\Gamma_p\Delta \le \xi\|x\|_2\), where \(\xi = k_g + k_\zeta + k_M k_{\ddot q}\). From Assumption P4, the properties of norms and the definition of \(e_q\), it follows that \(\|C\|_2 = \|C^\top\|_2 \le k_C\|\dot q\|_2 \le k_C(\|\dot e_q\|_2 + k_p)\). Thus
\[
e_q^\top\Gamma_p C^\top\dot e_q \le k_C k_p\|e_q\|_2\|\dot e_q\|_2 + k_C\|e_q\|_2\|\dot e_q\|_2^2 \tag{30a}
\]
\[
(e_q+\mu\dot e_q)^\top\Gamma_p C(\mathbf{1}\otimes\dot q_0) \le k_C k_p\|e_q\|_2\|\dot e_q\|_2 + \mu k_C k_p\|\dot e_q\|_2^2 + \mu k_C k_p^2\|\mu^{-1}e_q + \dot e_q\|_2 \tag{30b}
\]
Let \(\varphi(\mu,\eta) = \tfrac{1}{2}\mu^2\eta\lambda_{\min}(X) - \mu k_C k_p - k_M\). Define the functions \(\dot V_A\) (absolutely continuous) and \(\dot V_B\) (set-valued) as
\[
\dot V_A = -\mu^{-1}\Big[\varphi(\mu,\eta)\|\dot e_q\|_2^2 + \tfrac{1}{2}\eta\lambda_{\min}(X)\|e_q\|_2^2 - 2k_C k_p\|e_q\|_2\|\dot e_q\|_2 - k_C\|e_q\|_2\|\dot e_q\|_2^2\Big] \triangleq -\mu^{-1}g(\|e_q\|_2, \|\dot e_q\|_2) \tag{31a}
\]
\[
\dot V_B \in \mu^{-1}\mathcal{K}\Big[-\beta x^\top\Gamma_p\,\mathrm{sgn}(x-y) + k_C k_p^2\|x\|_2 + \xi\|x\|_2\Big] \tag{31b}
\]
By applying the inequalities in (30), and the eigenvalue inequalities noted in Section II-A, we conclude that
\[
\dot V \le \dot V_A + \dot V_B \tag{32}
\]
Part 3: In Part 3.1, we study \(\dot V_A\) and \(\dot V_B\) separately to establish negative definiteness properties.
Then, Part 3.2 studies \(\dot V_A + \dot V_B\) and proves a boundedness property.

Part 3.1: Consider the region of the state variables given by \(\|e_q(t)\|_2 \in [0, \mathcal{X}]\) and \(\|\dot e_q(t)\|_2 \in [0, \infty)\), where \(\mathcal{X} > 0\) was computed in Section III-C. One can compute a \(\mu_5^* \ge \mu_4^*\) and \(\eta_1^*\) such that \(\varphi(\mu,\eta) > 0\) for all \(\mu \ge \mu_5^*\), \(\eta \ge \eta_1^*\). Note that \(L_\mu > H_\mu > N_\mu > 0\) continues to hold. Observe that \(g(\|e_q\|_2, \|\dot e_q\|_2)\) in (31a) is of the same form as \(g(x,y)\) in Lemma 3, with \(x = \|e_q\|_2\) and \(y = \|\dot e_q\|_2\). With \(b = \varphi(\mu,\eta) > 0\), check whether the inequality
\[
\varphi(\mu_5^*, \eta_1^*) > \frac{(2k_C k_p)^2}{2\eta_1^*\lambda_{\min}(X)} + k_C\mathcal{X}
\]
holds. If it holds, then \(\dot V_A\) in (31a) is negative definite in the region, and we proceed to Part 3.2. If it does not hold, then there exist \(\mu_6^* \ge \mu_5^*\) and \(\eta_2^* \ge \eta_1^*\) such that
\[
\varphi(\mu_6^*, \eta_2^*) > \frac{(2k_C k_p)^2}{2\eta_2^*\lambda_{\min}(X)} + k_C\mathcal{X} \tag{33}
\]
Recall from (17) and (22) that \(\widehat V^*\) is dependent on \(\eta\) but independent of \(\mu\), because \(\mu_6^* \ge \mu_3^*\). One could leave \(\eta_2^* = \eta_1^*\) and find a sufficiently large \(\mu_6^*\) to satisfy (33). Alternatively, we could increase \(\eta\). Notice that \(\rho_2(\mu_3^*)\) and \(\widehat V^*\) are both \(O(\eta)\). Thus, as \(\eta\) increases, \(\mathcal{X}\) becomes independent of \(\eta\), whereas \(\varphi = O(\eta)\). We conclude that there exists a sufficiently large \(\eta_2^*\) satisfying (33), for which \(\mathcal{X}_1, \mathcal{Y}_1, \mathcal{X}, \mathcal{Y}\) need not be recomputed. With \(\mu_6^*, \eta_2^*\) satisfying (33), \(\dot V_A < 0\) in the aforementioned region.

Now consider \(\dot V_B\) over two time intervals, \(t_P = [0, T_1)\) and \(t_Q = [T_1, T_2)\), where \(T_1\) is given in Theorem 1 and \(T_2\) is the infimum of those values of \(t\) for which one of the inequalities \(\|e_q(t)\|_2 < \mathcal{X}\), \(\|\dot e_q(t)\|_2 < \mathcal{Y}\) fails. In Part 3.2, we argue that, without loss of generality, it is possible to take \(T_2 > T_1\). In fact, we establish that the inequalities never fail; \(T_2\) does not exist, and thus \(t_Q = [T_1, \infty)\)³. Consider firstly \(t \in t_P\).
Observe that the set-valued function \(-\beta x^\top\Gamma_p\,\mathrm{sgn}(x-y)\) is upper bounded by the single-valued function \(\beta\|x\|_1\). Recalling \(\dot V_B\) in (31b) yields
\[
\dot V_B \le (\sqrt{n}\beta + k_C k_p^2 + \xi)\big(\mu^{-1}\|e_q\|_2 + \|\dot e_q\|_2\big) \triangleq \overline{\dot V}_B \tag{34}
\]
because \(\|\cdot\|_2 \le \|\cdot\|_1 \le \sqrt{n}\|\cdot\|_2\) [27]. For \(t \in t_Q\), Theorem 1 yields that \(e(t) = \dot e(t) = 0\), which implies that \(y = 0\). Thus, the set-valued term \(\mathcal{K}[x^\top\Gamma_p\,\mathrm{sgn}(x-y)]\) in (31b) becomes the singleton \(\mathcal{K}[x^\top\Gamma_p\,\mathrm{sgn}(x)] = \{\|\Gamma_p x\|_1\}\) (since \(\Gamma_p > 0\) is diagonal). It then follows that
\[
\dot V_B = \mu^{-1}\big(-\beta\|\Gamma_p x\|_1 + k_C k_p^2\|x\|_2 + \xi\|x\|_2\big) \tag{35}
\]
In other words, \(\dot V_B\) for \(t \in t_Q\) is a continuous, single-valued function in the variables \(e_q\) and \(\dot e_q\). For \(t \in t_Q\), we observe that
\[
\dot V_B \le -\mu^{-1}(\beta\underline{\gamma} - k_C k_p^2 - \xi)\|e_q + \mu\dot e_q\|_1 < 0 \quad \text{if } \beta > (k_C k_p^2 + \xi)/\underline{\gamma} \tag{36}
\]
Part 3.2: To aid in this part of the proof, refer to Figure 1. Consider firstly \(\dot V\) for \(t \in t_P\). Specifically, let \(\dot V_{t_P} \triangleq \dot V_A + \overline{\dot V}_B\), which gives
\[
\dot V_{t_P} = -\mu^{-1}\Big[\varphi(\mu,\eta)\|\dot e_q\|_2^2 + \tfrac{1}{2}\eta\lambda_{\min}(X)\|e_q\|_2^2 - 2k_C k_p\|e_q\|_2\|\dot e_q\|_2 - k_C\|e_q\|_2\|\dot e_q\|_2^2 - (\sqrt{n}\beta + k_C k_p^2 + \xi)\big(\|e_q\|_2 + \mu\|\dot e_q\|_2\big)\Big] \triangleq -\mu^{-1}p(\|e_q\|_2, \|\dot e_q\|_2) \tag{37}
\]
Note that \(\dot V \le \dot V_{t_P}\), i.e. \(\dot V\) for \(t \in t_P\) is a differential inclusion which is upper bounded by a continuous function. Observe that \(p(\|e_q\|_2, \|\dot e_q\|_2)\) is of the form of \(h(x,y)\) in Corollary 1, with \(x = \|e_q\|_2\) and \(y = \|\dot e_q\|_2\). Here, \(b = \varphi(\mu,\eta)\), \(a = \tfrac{1}{2}\eta\lambda_{\min}(X)\), \(c = k_C\), \(d = 2k_C k_p\), \(e = \sqrt{n}\beta + k_C k_p^2 + \xi\) and \(f = \mu e\). Thus, for some given \(\vartheta, \varepsilon, \mathcal{X}, \mathcal{Y}\) satisfying the requirements detailed in Corollary 1, one can use (9) and (11) to find \(\mu, \eta\) such that \(p(\|e_q\|_2, \|\dot e_q\|_2)\) is positive definite in the region \(\mathcal{R}\). Note that \(\vartheta, \varepsilon\) can be selected by the designer. Choose \(\vartheta > \mathcal{X} - \mathcal{X}_1\) and \(\varepsilon > \mathcal{Y} - \mathcal{Y}_1\), and ensure that \(\mathcal{X} - \vartheta, \mathcal{Y} - \varepsilon > 0\).
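The bound (34) rests on two elementary facts: \(|x^\top\mathrm{sgn}(x-y)| \le \|x\|_1\) and the norm equivalence \(\|x\|_2 \le \|x\|_1 \le \sqrt{n}\|x\|_2\). An illustrative randomised spot-check (with \(\Gamma_p = I\) for simplicity):

```python
import math
import random

random.seed(0)
sgn = lambda v: (v > 0) - (v < 0)   # componentwise signum
n = 5
for _ in range(200):
    x = [random.uniform(-2, 2) for _ in range(n)]
    y = [random.uniform(-2, 2) for _ in range(n)]
    l1 = sum(abs(v) for v in x)
    l2 = math.sqrt(sum(v * v for v in x))
    # norm equivalence ||x||_2 <= ||x||_1 <= sqrt(n) ||x||_2:
    assert l2 <= l1 + 1e-12 and l1 <= math.sqrt(n) * l2 + 1e-12
    # the signum coupling term is dominated by the 1-norm of x:
    assert abs(sum(xi * sgn(xi - yi) for xi, yi in zip(x, y))) <= l1 + 1e-12
```

When \(y = 0\) the inequality becomes the exact identity \(x^\top\mathrm{sgn}(x) = \|x\|_1\), which is what turns \(\dot V_B\) into the single-valued, negative expression (35) after \(T_1\).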
Note that \(\mathcal{X} \ge \mathcal{X}_1\) and \(\mathcal{Y} \ge \mathcal{Y}_1\) imply \(\vartheta, \varepsilon > 0\).

³ Establishing that \(T_2\) does not exist rules out the possibility of finite-time escape for system (13).

Define the sets \(\mathcal{U}, \mathcal{V}\) and the region \(\mathcal{R}\) as in Corollary 1, with \(x = \|e_q\|_2\) and \(y = \|\dot e_q\|_2\). Define further sets \(\bar{\mathcal{U}} = \{\|e_q\|_2 : \|e_q\|_2 > \mathcal{X}\}\) and \(\bar{\mathcal{V}} = \{\|\dot e_q\|_2 : \|\dot e_q\|_2 > \mathcal{Y}\}\). Define the compact region \(\mathcal{S} = (\mathcal{U} \cup \mathcal{V}) \setminus (\bar{\mathcal{U}} \cup \bar{\mathcal{V}})\); see Fig. 1 for a visualisation of \(\mathcal{S}\). Note \(\mathcal{S} \subset \mathcal{R}\). Using Corollary 1, with precise calculation details given in [31, Theorem 2], one can find a pair of gains \(\eta\) and \(\mu\) which ensures that \(p(\|e_q\|_2, \|\dot e_q\|_2)\) is positive definite in \(\mathcal{R}\), and hence in \(\mathcal{S}\). It follows that \(\dot V_{t_P}\) is negative definite in \(\mathcal{S}\). Further define the region \(\|e_q(t)\|_2 \in [0, \mathcal{X} - \vartheta)\), \(\|\dot e_q(t)\|_2 \in [0, \mathcal{Y} - \varepsilon)\) as \(\mathcal{T}\), again visualised in Fig. 1.

Now we justify the claim that we can assume \(T_2 > T_1\). In fact, in doing so, we show that the existence of \(T_2\) creates a contradiction; the trajectories of (16) remain in \(\mathcal{T} \cup \mathcal{S}\) for all time. Although \(\dot V\) is sign indefinite in \(\mathcal{T}\) (i.e. \(V(t)\) can increase), notice from (25) that in \(\mathcal{T}\) there holds
\[
V(t) \le \tfrac{1}{2}\eta\lambda_{\max}(X)(\mathcal{X}-\vartheta)^2 + \tfrac{1}{2}k_M(\mathcal{Y}-\varepsilon)^2 + \mu^{-1}k_M(\mathcal{X}-\vartheta)(\mathcal{Y}-\varepsilon) \triangleq \mathcal{Z} \tag{38}
\]
Recalling that \(\delta > 0\) is arbitrarily small, one can easily verify that \(\mathcal{Z} < \widehat V^*\), because we selected \(\vartheta, \varepsilon\) such that \(\mathcal{X}_1 > \mathcal{X} - \vartheta\) and \(\mathcal{Y}_1 > \mathcal{Y} - \varepsilon\). In addition, recall that \(\dot V\) is negative definite in \(\mathcal{S}\), and now observe the following facts. For any trajectory starting in \(\mathcal{S}\) that enters \(\mathcal{T}\) at some time \(\bar t < T_2\), there holds \(V(t) < V(0)\) for all \(t \le \bar t\). Any trajectory starting in \(\mathcal{S}\) that stays in \(\mathcal{S}\) for all \(t\) up to \(T_2\) satisfies \(V(t) < V(0)\). Any trajectory in \(\mathcal{T}\) satisfies \(V(t) < \mathcal{Z}\). If any trajectory leaves \(\mathcal{T}\) and enters \(\mathcal{S}\) at some \(\hat t < T_2\), we observe that the crossover point is in the closure of \(\mathcal{T}\).
Because \(V\) is continuous (since the Filippov solutions for \(e_q, \dot e_q\) are absolutely continuous), we have \(V(\hat t) \le \mathcal{Z}\). Because the trajectory enters \(\mathcal{S}\), where \(\dot V < 0\), we also have \(V(\hat t + \delta_1) < V(\hat t) \le \mathcal{Z}\) for some arbitrarily small \(\delta_1\). This implies that all trajectories of (16) beginning⁴ in \(\mathcal{T} \cup \mathcal{S}\) satisfy \(V(t) \le \max\{\mathcal{Z}, V(0)\} < \widehat V^*\) for all \(t \le T_2\). On the other hand, at \(T_2\), and in accordance with Lemma 2, there holds
\[
\|e_q(T_2)\|_2 \le \sqrt{\frac{V(T_2)}{\chi}} < \sqrt{\frac{\widehat V^*}{\chi}} < \sqrt{\frac{\widehat V^*}{\rho_2(\mu_3^*)}} = \mathcal{X} \tag{39}
\]
where \(\chi = \lambda_{\min}(\tfrac{1}{2}\eta X - \tfrac{1}{2}\mu^{-2}\Gamma_p M) > \rho_2(\mu_3^*)\). One can also show that \(\|\dot e_q(T_2)\|_2 < \mathcal{Y}\) using an argument paralleling the one leading to (39); we omit this due to spatial limitations. The inequality (39), together with the similar inequality for \(\|\dot e_q(T_2)\|_2\), contradicts the definition of \(T_2\). In other words, \(T_2\) does not exist, and \(\|e_q(t)\|_2 < \mathcal{X}\), \(\|\dot e_q(t)\|_2 < \mathcal{Y}\) hold for all \(t\).

Part 4: Observe that \(\dot V_B\) changes at \(t = T_1\) to become negative definite. Consider now \(t \in t_Q = [T_1, T_2)\). Recalling that \(\dot V \le \dot V_A + \dot V_B\), we have
\[
\dot V \le -\mu^{-1}\Big[\varphi(\mu,\eta)\|\dot e_q\|_2^2 + \tfrac{1}{2}\eta\lambda_{\min}(X)\|e_q\|_2^2 - 2k_C k_p\|e_q\|_2\|\dot e_q\|_2 - k_C\|e_q\|_2\|\dot e_q\|_2^2 + (\beta\underline{\gamma} - k_C k_p^2 - \xi)\|e_q + \mu\dot e_q\|_1\Big] < 0 \tag{40}
\]

⁴ It is evident from (21) that \(e_q(0), \dot e_q(0) \in \mathcal{S} \cup \mathcal{T}\).

Figure 1. Diagram for Part 3 of the proof of Theorem 2, plotting \(\|\dot e_q(t)\|\) against \(\|e_q(t)\|\), with axis markings at \(\mathcal{X}-\vartheta, \mathcal{X}\) and \(\mathcal{Y}-\varepsilon, \mathcal{Y}\). The red region is \(\mathcal{S}\), in which \(\dot V(t) < 0\) for all \(t \ge 0\). The blue region is \(\mathcal{T}\), in which \(\dot V(t)\) is sign indefinite. A trajectory of (16) is shown with the black curve. We define \(t = T_2\), if it exists, as the infimum of all \(t\) values for which one of the inequalities \(\|e_q(T_2)\| < \mathcal{X}\) or \(\|\dot e_q(T_2)\| < \mathcal{Y}\) fails to hold, i.e. as the time at which the system (16) first leaves \(\mathcal{S}\). By contradiction, it is shown in Part 3.2 that the trajectory of (16) satisfies \(\|e_q(T_2)\| < \mathcal{X}\), \(\|\dot e_q(T_2)\| < \mathcal{Y}\).
I.e., \(T_2\) does not exist and the trajectory remains in \(\mathcal{T} \cup \mathcal{S}\) for all \(t\). The sign indefiniteness of \(\dot V\) in \(\mathcal{T}\) arises from the terms linear in \(\|e_q\|, \|\dot e_q\|\) in (37). These terms disappear at \(t = T_1\), when the finite-time observer converges. For all \(t > T_1\), \(\dot V < 0\) in \(\mathcal{T} \cup \mathcal{S}\), as shown in Part 4. Exponential convergence to the origin follows.

The inequality (40) holds in the region \(\mathcal{D} \triangleq \mathcal{S} \cup \mathcal{T}\). From the fact that \(\|\dot e_q(T_1)\|_2 < \mathcal{Y}\), there holds \(\dot V(T_1) < 0\). The argument applied to the interval \([0, \min\{T_1, T_2\}]\) above, culminating in (39), is now applied to the interval \(t_Q\). Since \(\dot V < 0\) in \(\mathcal{D}\), and at \(T_1\) the trajectory is in \(\mathcal{D}\), we have \(V(T_2) < V(T_1) < \widehat V^*\). It follows that (39) continues to hold (and likewise the argument regarding \(\|\dot e_q\|_2\)). It remains true that \(T_2\) does not exist, implying that the trajectory of (16) remains in \(\mathcal{D}\) and \(\dot V < 0\) for \(t \in [T_1, \infty)\).

Recall from below (24) that the eigenvalues of \(H_\mu\) are uniformly upper bounded away from infinity and lower bounded away from zero by constants. Specifically, there holds \(\lambda_{\min}(N_\mu)\|[e_q^\top, \dot e_q^\top]^\top\|_2^2 \le V \le \lambda_{\max}(L_\mu)\|[e_q^\top, \dot e_q^\top]^\top\|_2^2\). Because \(\mathcal{D}\) is compact, one can find a scalar \(a_3 > 0\) such that \(\dot V \le -a_3\|[e_q^\top, \dot e_q^\top]^\top\|_2^2\). It follows that \(\dot V \le -[a_3/\lambda_{\max}(L_\mu)]V\) in \(\mathcal{D}\). This inequality is used to conclude that \(V\) decays exponentially fast to zero, at a rate no slower than \(e^{-[a_3/\lambda_{\max}(L_\mu)]t}\) [32]. Specifically, there holds
\[
\left\|\begin{bmatrix} e_q(t) \\ \dot e_q(t) \end{bmatrix}\right\|_2 \le \sqrt{\frac{\lambda_{\max}(L_\mu)}{\lambda_{\min}(N_\mu)}}\,\left\|\begin{bmatrix} e_q(0) \\ \dot e_q(0) \end{bmatrix}\right\|_2 e^{-\frac{a_3}{2\lambda_{\max}(L_\mu)}t} \tag{41}
\]
It follows that \(\lim_{t\to\infty} e_q(t) = \mathbf{0}\) and \(\lim_{t\to\infty} \dot e_q(t) = \mathbf{0}\) exponentially fast, and the leader tracking objective is achieved.

Remark 2 (Additional degree of freedom in gain design). In [25] we assumed that \(\eta = 1\). There is more flexibility in this paper, since we allow \(\eta > 1\); one can adjust \(\mu\) and \(\eta\) separately, or simultaneously.
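The convergence rates discussed here trace back to the comparison-lemma step of Part 4: from \(\dot V \le -[a_3/\lambda_{\max}(L_\mu)]V\), the value \(V(t)\) is dominated by the solution of the scalar comparison system. A toy forward-Euler check with an illustrative rate (not a value computed from any network):

```python
import math

Lam = 0.8                  # illustrative stand-in for a_3 / lambda_max(L_mu)
V0, dt, T = 2.0, 1e-3, 5.0
V, t = V0, 0.0
while t < T - 1e-12:
    V += dt * (-Lam * V)   # Euler step of the comparison system V' = -Lam*V
    t += dt
# forward Euler satisfies V_k = V0*(1 - Lam*dt)**k <= V0*exp(-Lam*k*dt),
# so the comparison bound V(t) <= V(0) e^{-Lam t} is respected:
assert 0.0 < V <= V0 * math.exp(-Lam * T) + 1e-9
```

The state bound (41) then follows by sandwiching \(V\) between the quadratic forms induced by \(N_\mu\) and \(L_\mu\), which costs the square-root factor and halves the exponent.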
While the interplay between \(\mu, \eta, \beta\) and its effect on performance is difficult to quantify, we observe from extensive simulations that, in general, one should make \(\beta\) and \(\mu\) as small as possible. Where possible, it is better to hold \(\mu\) constant and increase \(\eta\) to satisfy an inequality involving both, e.g. (33). Notice that \(\lambda_{\max}(H_\mu)\) and \(a_3\) are both \(O(\eta)\), and that as \(\mu\) increases, \(\lambda_{\max}(H_\mu)\) does not increase but \(a_3\) does decrease. Thus, the convergence rate \(a_3/\lambda_{\max}(L_\mu)\) is not negatively affected by increasing \(\eta\), but is reduced by increasing \(\mu\). If only \(\mu\) is adjusted to be large (as in [25]), then velocity consensus is quickly achieved but position consensus is achieved only after a long time.

Remark 3 (Designing the gains). We summarise here the process of designing \(\mu, \eta, \beta\) to satisfy the inequalities detailed in the proof of Theorem 2. First, one may select \(\beta\) to satisfy (36). Then, \(\mu\) should be set to satisfy (24). The quantities \(\mathcal{X}\) and \(\mathcal{Y}\) discussed in Section III-C are then computed with \(\eta \gtrsim 1\); we noted below (33) that \(\mathcal{X}\) and \(\mathcal{Y}\) are independent of \(\eta\) as \(\eta\) increases. Having computed \(\mathcal{X}\) and \(\mathcal{Y}\), the last step is to adjust \(\eta\) to ensure that \(g(\|e_q\|_2, \|\dot e_q\|_2)\) and \(p(\|e_q\|_2, \|\dot e_q\|_2)\) are positive definite (see Part 3.1 of the proof of Theorem 2). Details of the inequalities ensuring positive definiteness are found in the proofs of Lemma 3 and Corollary 1 in [31, Section II-A].

Remark 4 (Robustness). The proposed algorithm (15) is robust in several respects. First, the exponential stability property implies that small amounts of noise produce small departures from the ideal behaviour. Moreover, the signum term in the controller offers robustness to the unknown disturbance \(\zeta_i(t)\). In contrast, and as discussed in the introduction, adaptive controllers are not robust to unmodelled agent dynamics.

Remark 5 (Controller structure). Consider the controller (15).
The term containing the signum function ensures exact tracking of the leader's trajectory. Consider Fig. 1. For \(t < T_1\), the signum term gives rise to the region \(\mathcal{T}\), where \(\dot V\) is sign indefinite. This signum term can in fact drive an agent away from its neighbours, due to the nonzero error term \(e_i(t)\) for \(t < T_1\). However, for \(t < T_1\), the linear terms of the controller (and in particular adjustment of the gains \(\eta, \mu\)) ensure that \(\dot V < 0\) in \(\mathcal{S}\). This ensures that the followers remain in the bounded region \(\mathcal{S}\) centred on the leader. Such a controller offers added robustness. For example, if \(\mathcal{G}_B\) becomes temporarily disconnected, all agents will remain close to the leader so long as \(\mathcal{G}_A\) has a directed spanning tree. When connectivity of \(\mathcal{G}_B\) is restored, perfect tracking follows; this is illustrated in the simulations below.

E. Practical Tracking By Approximating the Signum Function

Although the signum function term in (15) allows the leader-tracking objective to be achieved, it carries an offsetting disadvantage. Use of the signum function can cause mechanical components to fatigue, due to the rapid switching of the control input. Moreover, chattering often results, which can excite the natural frequencies of high-order unmodelled dynamics. A modified controller is now proposed using a continuous approximation of the signum function, and we derive an explicit upper bound on the tracking error⁵. Consider the following continuous, model-independent algorithm for the \(i\)th agent, replacing (15):
\[
\tau_i = -\eta \sum_{j\in\mathcal{N}_{A,i}} a_{ij}\big[(q_i - q_j) + \mu(\dot q_i - \dot q_j)\big] - \beta z_i\big((q_i - \hat r_i) + \mu(\dot q_i - \hat v_i)\big) \tag{42}
\]

⁵ We do not consider an approximation for (14), because the observer involves computing state estimates, as opposed to the physical control input for (13).

where \(z_i(x) \triangleq x/(\|x\|_2 + \epsilon)\), with \(\epsilon > 0\) being the degree of approximation.
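The behaviour of the boundary-layer function \(z_i\) and the resulting adjustable tracking error can be illustrated on a scalar toy loop. This is a hypothetical first-order system, not the Euler-Lagrange dynamics of the paper, and the constants are made up: a constant disturbance \(d < \beta\) leaves a residual error proportional to \(\epsilon\).

```python
import math

def z(x, eps):
    """Boundary-layer approximation z(x) = x / (|x| + eps) of sgn(x)."""
    return x / (abs(x) + eps)

# |z| < 1 for every eps > 0, and z -> sgn pointwise as eps -> 0:
assert abs(z(10.0, 0.5)) < 1 and abs(z(3.0, 1e-9) - 1.0) < 1e-6

def steady_error(eps, beta=1.0, d=0.5, dt=1e-3, T=30.0):
    """Toy scalar loop x_dot = -beta*z(x) + d with constant disturbance d.
    The equilibrium beta*x/(|x|+eps) = d gives x* = eps*d/(beta-d): the
    residual error scales linearly with the boundary-layer width eps."""
    x, t = 2.0, 0.0
    while t < T:
        x += dt * (-beta * z(x, eps) + d)
        t += dt
    return x

assert abs(steady_error(0.5) - 0.5) < 0.05   # eps*d/(beta-d) = 0.5
assert abs(steady_error(0.1) - 0.1) < 0.05   # shrinking eps shrinks the error
```

With a pure signum term (\(\epsilon = 0\)) the same loop would reject the disturbance exactly, at the cost of a discontinuous, chattering input; the boundary layer trades exactness for smoothness.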
The function \(z_i(x)\) approximates \(\mathrm{sgn}(x)\) via the boundary layer concept [33]. The networked system is
\[
M\ddot e_q + C\dot e_q + \eta(L_{22}\otimes I_p)(e_q + \mu\dot e_q) + g + \zeta + \beta z(s + \mu\dot s) + M(\mathbf{1}_n\otimes\ddot q_0) + C(\mathbf{1}_n\otimes\dot q_0) = 0 \tag{43}
\]
where \(z(s+\mu\dot s) = [z_1(s_1+\mu\dot s_1)^\top, \ldots, z_n(s_n+\mu\dot s_n)^\top]^\top\). Note that \(\|z_i(x_i)\|_2 < 1\) for any \(\epsilon > 0\). The computation of the quantities \(\mathcal{X}, \mathcal{Y}\) in Section III-C is unchanged. Because of the similarity to the proof of Theorem 2, we do not provide a complete proof here; a sketch is outlined, and we leave the minor adjustments to the reader. Consider the same Lyapunov-like function as in (23), with \(\mu\) sufficiently large to ensure \(H_\mu > 0\). The derivative of (23) with respect to time, along the trajectories of (43), is given by
\[
\dot V = -\mu^{-1}\Big[\tfrac{1}{2}\eta e_q^\top X e_q + \tfrac{1}{2}\mu^2\eta\dot e_q^\top X\dot e_q - \dot e_q^\top\Gamma_p M\dot e_q - e_q^\top\Gamma_p C^\top\dot e_q + x^\top\Gamma_p\big(C(\mathbf{1}_n\otimes\dot q_0) + \Delta\big) + \beta\sum_{i=1}^n \gamma_i x_i^\top z_i(x_i + y_i)\Big] \tag{44}
\]
Let \(t_P\) and \(t_Q\) be defined as in Part 3.2 of the proof of Theorem 2. One can compute that, for \(t \in t_P\), there holds
\[
\dot V \le -\mu^{-1}\Big[\varphi(\mu,\eta)\|\dot e_q\|_2^2 + \tfrac{1}{2}\eta\lambda_{\min}(X)\|e_q\|_2^2 - 2k_C k_p\|e_q\|_2\|\dot e_q\|_2 - k_C\|e_q\|_2\|\dot e_q\|_2^2 - (k_C k_p^2 + \xi)\|x\|_2 - \beta\sum_{i=1}^n\|x_i\|_2\Big] \le -\mu^{-1}p(\|e_q\|_2, \|\dot e_q\|_2) \tag{45}
\]
where \(p(\cdot,\cdot)\) was defined in (37). This is because \(|x_i^\top z_i(x_i + y_i)| < \|x_i\|_2\), and \(\bar\gamma = 1\). In other words, any \(\mu, \eta\) which ensure boundedness of the trajectories of (16) also ensure that the trajectories of (43) remain bounded in \(\mathcal{S} \cup \mathcal{T}\) for all time. Consider now \(t \in t_Q\), and observe that \(x_i^\top z_i(x_i) = \|x_i\|_2^2/(\|x_i\|_2 + \epsilon)\).
It follows that
\[
x^\top\Gamma_p\big(C(\mathbf{1}_n\otimes\dot q_0) + \Delta\big) + \beta\sum_{i=1}^n \gamma_i x_i^\top z_i(x_i) = \sum_{i=1}^n \Big[\gamma_i x_i^\top\big(C_i\dot q_0 + M_i\ddot q_0 + g_i + \zeta_i\big) + \beta\gamma_i x_i^\top z_i(x_i)\Big] \ge \sum_{i=1}^n \gamma_i\Big[-(k_C k_p^2 + \xi)\|x_i\|_2 + \beta\frac{\|x_i\|_2^2}{\|x_i\|_2 + \epsilon}\Big] - k_C k_p\|e_q\|_2\|\dot e_q\|_2 - \mu k_C k_p\|\dot e_q\|_2^2 \tag{46}
\]
which in turn yields
\[
\dot V \le -\mu^{-1}\Big[\varphi(\mu,\eta)\|\dot e_q\|_2^2 + \tfrac{1}{2}\eta\lambda_{\min}(X)\|e_q\|_2^2 - 2k_C k_p\|e_q\|_2\|\dot e_q\|_2 - k_C\|e_q\|_2\|\dot e_q\|_2^2 - \sum_{i=1}^n(k_C k_p^2 + \xi)\|x_i\|_2 + \beta\underline{\gamma}\sum_{i=1}^n\frac{\|x_i\|_2^2}{\|x_i\|_2 + \epsilon}\Big] \tag{47}
\]
If \(\beta\) satisfies (36), then there holds
\[
\beta\underline{\gamma}\sum_{i=1}^n\frac{\|x_i\|_2^2}{\|x_i\|_2 + \epsilon} - \sum_{i=1}^n(k_C k_p^2 + \xi)\|x_i\|_2 \ge \beta\underline{\gamma}\sum_{i=1}^n\Big[\frac{\|x_i\|_2^2}{\|x_i\|_2 + \epsilon} - \|x_i\|_2\Big] \tag{48}
\]
\[
= -\beta\underline{\gamma}\sum_{i=1}^n\frac{\epsilon\|x_i\|_2}{\|x_i\|_2 + \epsilon} > -\beta\underline{\gamma}n\epsilon \tag{49}
\]
because \(\|x_i\|_2/(\|x_i\|_2 + \epsilon) < 1\) for all \(\epsilon > 0\). From this, we conclude that \(\dot V \le \dot V_A + \mu^{-1}\beta\underline{\gamma}n\epsilon\). Recall also that any \(\mu, \eta\) which ensure \(p(\cdot,\cdot)\) is positive definite in \(\mathcal{S}\) also ensure that \(\dot V_A\) is negative definite in \(\mathcal{D}\). Similar to Part 4 of the proof of Theorem 2, one has \(\dot V \le -\psi V + \mu^{-1}\beta\underline{\gamma}n\epsilon\) for some \(\psi > 0\). We conclude using [32, Lemma 3.4 (Comparison Lemma)] that
\[
V(t) \le V(0)e^{-\psi t} + \mu^{-1}\beta\underline{\gamma}n\epsilon\int_0^t e^{-\psi(t-\tau)}\,\mathrm{d}\tau \tag{50}
\]
\[
\le e^{-\psi t}V(0) + \frac{\beta\underline{\gamma}n\epsilon}{\mu\psi} \tag{51}
\]
which implies that \(V(t)\) decays exponentially fast to the bounded set \(\{[e_q^\top, \dot e_q^\top]^\top : V \le \beta\underline{\gamma}n\epsilon/(\mu\psi)\}\). From the fact that \(V \ge \lambda_{\min}(N_\mu)\|[e_q^\top, \dot e_q^\top]^\top\|_2^2\), we conclude that the trajectories of (43) converge to the bounded set
\[
\Omega = \left\{[e_q^\top, \dot e_q^\top]^\top : \big\|[e_q^\top, \dot e_q^\top]^\top\big\|_2 \le \left(\frac{\beta\underline{\gamma}n\epsilon}{\mu\psi\lambda_{\min}(N_\mu)}\right)^{1/2}\right\} \tag{52}
\]
Note that the radius of \(\Omega\) shrinks as \(\epsilon \to 0\), i.e. the tracking error is adjustable by the designer.

IV. LEADER TRACKING ON DYNAMIC NETWORKS

In this section, we consider the case where the sensing graph \(\mathcal{G}_A(t)\) is dynamic, i.e. time-varying. We assume that there is a finite set of \(m\) possible network topologies, given as \(\bar{\mathcal{G}}_A = \{\mathcal{G}_{A,j} = (\mathcal{V}, \mathcal{E}_j, \mathcal{A}_j) : j \in \mathcal{J}\}\), where \(\mathcal{J} = \{1, \ldots, m\}\) is the index set.
We assume further that every \(\mathcal{G}_{A,j}\), \(j \in \mathcal{J}\), contains a directed spanning tree, with \(v_0\) as the root node and with no edges incoming to \(v_0\). Define \(\sigma(t) : [0,\infty) \mapsto \mathcal{J}\) as the piecewise constant switching signal which determines the switching of \(\mathcal{G}_A(t)\), with a finite number of switches in any finite interval. The switching times are indexed as \(t_1, t_2, \ldots\), and we assume that \(\sigma(t)\) is such that \(t_{i+1} - t_i > \pi_d > 0\) for all \(i\), where \(\pi_d\) is the dwell time. The dynamic network is modelled by the graph \(\mathcal{G}_A(t) = \mathcal{G}_{A,\sigma(t)}\), which in turn implies that the Laplacian associated with \(\mathcal{G}_A(t)\) is dynamic, given by \(\mathcal{L}_A(t) = \mathcal{L}_{A,\sigma(t)}\). Denote by \(L_{22}(t) = L_{22,\sigma(t)}\) the lower block of \(\mathcal{L}_A(t)\), partitioned as in (12). It is straightforward to show that the follower network dynamics are given by
\[
\ddot e_q \overset{a.e.}{\in} \mathcal{K}\Big[-M^{-1}\big(C\dot e_q + \eta(L_{22,\sigma(t)}\otimes I_p)(e_q + \mu\dot e_q) + g + \zeta + \beta\,\mathrm{sgn}(s + \mu\dot s) + M(\mathbf{1}_n\otimes\ddot q_0) + C(\mathbf{1}_n\otimes\dot q_0)\big)\Big] \tag{53}
\]
We now seek to exploit an established result which states that a switched system is exponentially stable if the switching is sufficiently slow [34], and if the 'frozen' systems arising between switching instants are all exponentially stable. Specifically, the following result holds.

Theorem 3 ([34, Theorem 3.2]). Consider the family of systems \(\dot x = f_j(x)\), \(j \in \mathcal{J}\). Suppose that, in a domain \(\mathcal{D} \subseteq \mathbb{R}^n\) containing \(x = 0\), there exist \(C^1\) functions \(V_j : \mathcal{D} \mapsto \mathbb{R}\), \(j \in \mathcal{J}\), and positive constants \(c_j, d_j, \Lambda_j\), such that
\[
c_j\|x\|_2^2 \le V_j(x) \le d_j\|x\|_2^2, \quad \forall x \in \mathcal{D},\ \forall j \in \mathcal{J} \tag{54}
\]
and \(\dot V_j(x) \le -\Lambda_j V_j(x)\) for all \(x \in \mathcal{D}\), \(j \in \mathcal{J}\). Define \(\kappa \triangleq \sup_{p,q\in\mathcal{J}}\{V_p(x)/V_q(x) : x \in \mathcal{D}\}\), and note that \(1 \le \kappa < \infty\). Then, for \(x(0) \in \mathcal{D}\), the origin \(x = 0\) of the switched system \(\dot x = f_{\sigma(t)}(x)\) is exponentially stable for every switching signal \(\sigma(t)\) with dwell time \(\pi_d > \log(\kappa)/\Lambda\), where \(\Lambda = \min_{j\in\mathcal{J}} \Lambda_j\).
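Once per-mode constants are available, the quantities of Theorem 3 are computed mechanically. A sketch with illustrative per-mode constants \((c_j, d_j, \Lambda_j)\) standing in for \(\lambda_{\min}(N_{\mu,j})\), \(\lambda_{\max}(L_{\mu,j})\) and \(a_{3,j}/\lambda_{\max}(L_{\mu,j})\); none of these are values from the paper:

```python
import math

# Illustrative per-mode constants for m = 3 frozen subsystems:
c = [0.5, 0.8, 0.6]      # stand-ins for lambda_min(N_mu_j)
d = [2.0, 3.5, 2.5]      # stand-ins for lambda_max(L_mu_j)
Lams = [0.4, 0.7, 0.5]   # stand-ins for a_3j / lambda_max(L_mu_j)

kappa = max(d) / min(c)           # worst-case ratio V_p(x) / V_q(x)
Lambda = min(Lams)                # slowest guaranteed per-mode decay rate
pi_d = math.log(kappa) / Lambda   # dwell-time lower bound of Theorem 3

assert kappa >= 1 and pi_d > 0
# widening the spread between the V_j forces slower switching:
assert math.log(2 * max(d) / min(c)) / Lambda > pi_d
```

Intuitively, \(\log(\kappa)\) measures how much \(V\) can jump up when the Lyapunov function is exchanged at a switch, and each mode must be active long enough for its decay (rate \(\Lambda\)) to pay that jump back.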
Under Assumptions 1 and 2, we know from Theorem 2 that for each \(j\)th subsystem
\[
\ddot e_q \overset{a.e.}{\in} \mathcal{K}\Big[-M^{-1}\big(C\dot e_q + \eta(L_{22,j}\otimes I_p)(e_q + \mu\dot e_q) + g + \zeta + \beta\,\mathrm{sgn}(s + \mu\dot s) + M(\mathbf{1}_n\otimes\ddot q_0) + C(\mathbf{1}_n\otimes\dot q_0)\big)\Big] \tag{55}
\]
there exist control gains \(\mu_j, \eta_j, \beta_j\) which achieve the leader tracking objective exponentially fast. In seeking to apply Theorem 3 to the system (53), we obtain, for each \(j \in \mathcal{J}\) with \(V_j\) given in (23), the values \(c_j = \lambda_{\min}(N_{\mu,j})\), \(d_j = \lambda_{\max}(L_{\mu,j})\) and \(\Lambda_j = a_{3,j}/\lambda_{\max}(L_{\mu,j})\), where \(a_{3,j}\) was computed below (40). It follows that \(\Lambda = \min_{j\in\mathcal{J}} a_{3,j}/\lambda_{\max}(L_{\mu,j})\), and one can also obtain \(\kappa = \max_{j\in\mathcal{J}}\lambda_{\max}(L_{\mu,j})/\min_{j\in\mathcal{J}}\lambda_{\min}(N_{\mu,j})\).

Theorem 4. Under Assumptions 1 and 2, with dynamic topology given by \(\mathcal{G}_A(t) = \mathcal{G}_{A,\sigma(t)}\), the leader tracking objective is achieved using (15) if 1) the control gains \(\mu, \eta, \beta\) satisfy a set of lower bounding inequalities, and 2) the dwell time \(\pi_d\) satisfies \(\pi_d > \log(\kappa)/\Lambda\), where \(\kappa, \Lambda\) are as defined in the immediately preceding paragraph.

Proof. By selecting \(\mu = \max_{j\in\mathcal{J}}\mu_j\), \(\eta = \max_{j\in\mathcal{J}}\eta_j\), and \(\beta = \max_{j\in\mathcal{J}}\beta_j\), we guarantee that each \(j\)th subsystem (55) is exponentially stable, and also guarantee boundedness of the trajectories of (53) before the finite-time observer has converged. After convergence of the finite-time observer, application of Theorem 3, using the quantities \(\kappa\) and \(\Lambda\) outlined above, delivers the conclusion that (53) is exponentially stable, i.e. the leader tracking objective is achieved.

V. SIMULATIONS

A simulation is now provided to demonstrate the algorithm (15). Each agent is a two-link robotic arm, and five follower agents must track the trajectory of the leader agent. The equations of motion are given in [26, pp. 259-262]. The generalised coordinates for agent \(i\) are \(q_i = [q_i^{(1)}, q_i^{(2)}]^\top\), which are the angles of each link in radians.
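The gains used in the simulations below were obtained via the procedure of Remark 3 (and then adjusted). A sketch of that sequential selection, with made-up stand-in constants that are not those of the two-link arm model:

```python
import math

# Made-up stand-in constants for the quantities named in Remark 3:
k_C, k_p, xi, gamma_low = 0.4, 1.0, 2.0, 1.0
lam_min_X, k_M, X_bound = 1.0, 1.5, 3.0

# Step 1: beta from (36): beta > (k_C * k_p**2 + xi) / gamma_low.
beta = 1.1 * (k_C * k_p**2 + xi) / gamma_low
# Step 2: mu from (24): mu > sqrt(2 * k_M / lambda_min(X)).
mu = 1.1 * math.sqrt(2 * k_M / lam_min_X)

def phi(mu, eta):
    # phi(mu, eta) = (1/2) mu^2 eta lambda_min(X) - mu*k_C*k_p - k_M
    return 0.5 * mu**2 * eta * lam_min_X - mu * k_C * k_p - k_M

# Step 3: holding mu fixed, increase eta until the test (33) holds; phi grows
# linearly in eta while the right-hand side shrinks, so the loop terminates.
eta = 1.0
while phi(mu, eta) <= (2 * k_C * k_p)**2 / (2 * eta * lam_min_X) + k_C * X_bound:
    eta *= 1.5
assert phi(mu, eta) > 0 and beta > (k_C * k_p**2 + xi) / gamma_low
```

As the simulation section notes, these inequalities are conservative, so gains satisfying them can typically be reduced by trial in practice.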
The agent parameters and initial conditions are given in Table I, and are chosen arbitrarily. Several aspects of the simulation are designed to highlight the robustness of the algorithm. First, the topology is switching, with the graph \(\mathcal{G}_A(t)\) switching periodically between the three graphs indicated in Fig. 2, at a frequency of 1 Hz. Graph \(\mathcal{G}_B(t)\) switches between the three graphs indicated in Fig. 3, also at a frequency of 1 Hz. Moreover, if \(\mathcal{G}_A(t) = \mathcal{G}_{A,i}\), then \(\mathcal{G}_B(t) = \mathcal{G}_{B,i}\) for \(i = 1, 2, 3\). Additionally, \(\mathcal{G}_B(t)\) is entirely disconnected for \(t \in [10, 20)\) of the simulation. Last, each agent has a disturbance \(\zeta_i(t) = [\sin(i \times 0.1t), \cos(i \times 0.1t)]^\top\) for \(i = 1, \ldots, 5\). All edges of \(\mathcal{G}_A(t)\) and \(\mathcal{G}_B(t)\) have edge weights of 5.

Figure 2. In the simulation, graph \(\mathcal{G}_A(t)\) switches between the three graphs \(\mathcal{G}_{A,1}, \mathcal{G}_{A,2}, \mathcal{G}_{A,3}\) periodically at a rate of 1 Hz.

The control gains are set as \(\mu = 1.5\), \(\eta = 16\), \(\beta = 25\); they are first computed using the inequalities and then adjusted, because the inequalities can lead to conservative gain choices. For the observer, set \(\omega_1 = 1\), \(\omega_2 = 5\). The leader trajectory is
\[
q_0(t) = \begin{bmatrix} 0.5\sin(t) - 0.2\sin(0.5t) \\[2pt] 0.4\big(2\sin(t) + \tfrac{\sin(2t)}{2} + \tfrac{\sin(3t)}{3} + \tfrac{\sin(4t)}{4}\big) \end{bmatrix}
\]
Figure 4 shows the generalised coordinates \(q^{(1)}\) and \(q^{(2)}\). The generalised velocities \(\dot q^{(1)}\) and \(\dot q^{(2)}\) are shown in Fig. 5. The well-studied observer results are omitted [29]. Consider Fig. 4. Clearly, \(q_i(t)\) has almost tracked the leader by \(t = 10\), but the distributed observer graph \(\mathcal{G}_B(t)\) disconnects for \(t \in [10, 20)\). As discussed in Remark 5, the controller (15) is robust to network failure, since its linear term ensures that the trajectories remain bounded while \(\mathcal{G}_B(t)\) is disconnected (and thus the followers do not possess accurate knowledge of \(q_0, \dot q_0\)).
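For concreteness, a per-agent implementation sketch of the continuous controller (42) is given below. The gains match those of the simulation section, but the state snapshot and the helper name `tau_i` are our own illustrative assumptions:

```python
import math

def tau_i(q_i, dq_i, nbrs, r_hat, v_hat, eta=16.0, mu=1.5, beta=25.0, eps=0.5):
    """Torque of the continuous controller (42) for agent i.
    q_i, dq_i: own generalised coordinates/velocities (lists of length p).
    nbrs: list of (a_ij, q_j, dq_j) for sensing neighbours in G_A.
    r_hat, v_hat: this agent's observer estimates of q_0, dq_0."""
    p = len(q_i)
    # argument of z_i: (q_i - r_hat) + mu*(dq_i - v_hat)
    w = [(q_i[k] - r_hat[k]) + mu * (dq_i[k] - v_hat[k]) for k in range(p)]
    norm_w = math.sqrt(sum(v * v for v in w))
    tau = [0.0] * p
    # relative-state linear term over sensing neighbours:
    for a_ij, q_j, dq_j in nbrs:
        for k in range(p):
            tau[k] -= eta * a_ij * ((q_i[k] - q_j[k]) + mu * (dq_i[k] - dq_j[k]))
    # boundary-layer term beta * z_i(w) = beta * w / (||w|| + eps):
    for k in range(p):
        tau[k] -= beta * w[k] / (norm_w + eps)
    return tau

# hypothetical snapshot of states for a 2-DOF arm with one neighbour:
t = tau_i([0.1, 0.9], [-0.5, -0.6], [(5.0, [0.0, 0.8], [-0.4, -0.5])],
          [0.05, 0.85], [-0.45, -0.55])
assert len(t) == 2 and all(math.isfinite(v) for v in t)
```

Replacing the last loop's divisor with \(\|w\|_2\) alone (i.e. \(\epsilon = 0\)) recovers the discontinuous controller (15), with the observer estimates playing the role of the leader's states.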
In the simulation, we observe that leader tracking is achieved once \(\mathcal{G}_B(t)\) reconnects at \(t = 20\). Figures 6 and 7 show the generalised coordinates and generalised velocities, respectively, for the same simulation setup but with \(\mu\) increased from 1.5 to 4. The effects are clear when we compare with Figs. 4 and 5. First, the rate of velocity synchronisation relative to the rate of position synchronisation is much larger when \(\mu = 4\). On the other hand, the overall convergence rate is decreased; it takes longer for position and velocity synchronisation to occur, for the reasons presented in Remark 2. However, the increased \(\mu\) has the benefit of keeping the follower agents in a smaller ball around the leader while \(\mathcal{G}_B(t)\) is disconnected for \(t \in [10, 20)\), i.e. the tracking error for \(t \in [10, 20)\) is smaller. This is because increasing \(\mu\) decreases the size of \(\mathcal{T}\), where \(\dot V\) is sign indefinite, as shown in Fig. 1.

Last, we show a simulation which utilises the continuous approximation algorithm (42). The simulation setup is as given above, and we let \(\epsilon = 0.5\). Figure 8 shows the generalised coordinates of the resulting simulation, with a magnification of the plot for the final 2 seconds. One can clearly see that practical tracking is achieved, with a small error. The velocity plot is omitted.

VI. CONCLUSION

In this paper, a distributed, discontinuous model-independent algorithm was proposed for a directed network of Euler-Lagrange agents.
It was shown that the leader tracking objective is achieved semi-globally exponentially fast if the directed graph contains a directed spanning tree rooted at the leader, and if three control gains satisfy a set of lower bounding inequalities. The algorithm was shown to be robust to agent disturbances, unmodelled agent dynamics and modelling uncertainties. A continuous approximation of the algorithm was proposed to avoid chattering, and we then extended the result to switching topologies. A numerical simulation illustrated the algorithm's effectiveness.

Table I. Agent parameters used in simulation.

          m1    m2   l1    l2   lc1   lc2   I1    I2    q_i^(1)(0)  q_i^(2)(0)  dq_i^(1)(0)  dq_i^(2)(0)
Agent 1   0.5   0.4  0.4   0.3  0.2   0.15  0.1   0.05   0.1         0.9        -0.5         -0.6
Agent 2   0.2   0.4  0.6   0.1  0.35  0.08  0.15  0.08  -0.4         0.9         0.1         -1.4
Agent 3   0.5   0.4  0.4   0.3  0.2   0.15  0.1   0.05   0.9        -1.2         0.3          0.6
Agent 4   1     0.6  0.45  0.8  0.2   0.4   0.15  0.5   -2.0        -2.0        -1.0          0.2
Agent 5   0.25  0.4  0.8   0.5  0.3   0.1   0.45  0.15   0.3         1.5         1.0          1.2

Figure 3. Graph \(\mathcal{G}_B(t)\) switches between the three graphs \(\mathcal{G}_{B,1}, \mathcal{G}_{B,2}, \mathcal{G}_{B,3}\) periodically at a rate of 1 Hz; if \(\mathcal{G}_A(t) = \mathcal{G}_{A,i}\) then \(\mathcal{G}_B(t) = \mathcal{G}_{B,i}\) for \(i = 1, 2, 3\).

Figure 4. Plot of generalised coordinates vs. time; the graph \(\mathcal{G}_B(t)\) is disconnected for \(t \in [10, 20)\).

REFERENCES

[1] W. Ren and Y. Cao, Distributed Coordination of Multi-agent Networks: Emergent Problems, Models and Issues. Springer London, 2011.
[2] R. Ortega, J. A. L. Perez, P. J. Nicklasson, and H. Sira-Ramirez, Passivity-based Control of Euler-Lagrange Systems: Mechanical, Electrical and Electromechanical Applications. Springer Science & Business Media, 2013.
[3] S.-J. Chung and J.-J.
Slotine, "Cooperative Robot Control and Concurrent Synchronization of Lagrangian Systems," IEEE Transactions on Robotics, vol. 25, no. 3, pp. 686-700, 2009.
[4] Q. Hu, B. Xiao, and P. Shi, "Tracking Control of Uncertain Euler-Lagrange Systems with Finite-time Convergence," International Journal of Robust and Nonlinear Control, vol. 25, no. 17, pp. 3299-3315, 2015.
[5] Q. Yang, H. Fang, J. Chen, Z. Jiang, and M. Cao, "Distributed Global Output-Feedback Control for a Class of Euler-Lagrange Systems," IEEE Transactions on Automatic Control, November 2017.
[6] Z. Meng, Z. Lin, and W. Ren, "Leader-Follower Swarm Tracking for Networked Lagrange Systems," Systems & Control Letters, vol. 61, no. 1, pp. 117-126, 2012.

Figure 5. Plot of generalised velocities vs. time; the graph \(\mathcal{G}_B(t)\) is disconnected for \(t \in [10, 20)\).

Figure 6. Plot of generalised coordinates vs. time; the graph \(\mathcal{G}_B(t)\) is disconnected for \(t \in [10, 20)\). The gain \(\mu\) has been increased from \(\mu = 1.5\) to \(\mu = 4\).

[7] J. Mei, W. Ren, and G. Ma, "Distributed Containment Control for Lagrangian Networks With Parametric Uncertainties Under a Directed Graph," Automatica, vol. 48, no. 4, pp. 653-659, 2012.
[8] A. Abdessameud, I. G. Polushin, and A. Tayebi, "Synchronization of Lagrangian Systems with Irregular Communication Delays," IEEE Transactions on Automatic Control, vol. 59, no. 1, pp. 187-193, January 2014.
[9] G. Chen and F. L. Lewis, "Distributed Adaptive Tracking Control for Synchronization of Unknown Networked Lagrangian Systems," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 41, no. 3, pp. 805-816, 2011.
[10] E. Nuno, R. Ortega, L. Basanez, and D.
Hill, "Synchronization of Networks of Nonidentical Euler-Lagrange Systems with Uncertain Parameters and Communication Delays," IEEE Transactions on Automatic Control, vol. 56, no. 4, pp. 935-941, 2011.
[11] Z. Meng, D. V. Dimarogonas, and K. H. Johansson, "Leader-Follower Coordinated Tracking of Multiple Heterogeneous Lagrange Systems Using Continuous Control," IEEE Transactions on Robotics, vol. 30, no. 3, pp. 739-745, 2014.

Figure 7. Plot of generalised velocities vs. time; the graph \(\mathcal{G}_B(t)\) is disconnected for \(t \in [10, 20)\). The gain \(\mu\) has been increased from \(\mu = 1.5\) to \(\mu = 4\).

Figure 8. Plot of generalised coordinates vs. time, with a magnification of the final 2 seconds of simulation; the graph \(\mathcal{G}_B(t)\) is disconnected for \(t \in [10, 20)\). The continuous approximation algorithm (42) is used with \(\epsilon = 0.5\).

[12] S. Ghapani, J. Mei, W. Ren, and Y. Song, "Fully Distributed Flocking with a Moving Leader for Lagrange Networks with Parametric Uncertainties," Automatica, vol. 67, pp. 67-76, 2016.
[13] A. Abdessameud, A. Tayebi, and I. Polushin, "Leader-Follower Synchronization of Euler-Lagrange Systems with Time-Varying Leader Trajectory and Constrained Discrete-time Communication," IEEE Transactions on Automatic Control, 2016, preprint.
[14] W. Ren, "Distributed Leaderless Consensus Algorithms for Networked Euler-Lagrange Systems," International Journal of Control, vol. 82, no. 11, pp. 2137-2149, 2009.
[15] E. Nuno, I. Sarras, and L. Basanez, "Consensus in Networks of Nonidentical Euler-Lagrange Systems Using P+d Controllers," IEEE Transactions on Robotics, vol. 29, no. 6, pp. 1503-1508, 2013.
[16] Z. Meng, T. Yang, G. Shi, D. V. Dimarogonas, Y.
Hong, and K. H. Johansson, “Set Target Aggregation of Multiple Mechanical Systems, ” in 2014 IEEE 53rd Annual Confer ence on Decision and Control . IEEE, 2014, pp. 6830–6835. [17] J. Mei, W . Ren, and G. Ma, “Distributed Coordinated Tracking W ith a Dynamic Leader for Multiple Euler -Lagrange Systems, ” IEEE T ransac- tions on Automatic Control , v ol. 56, no. 6, pp. 1415–1421, 2011. [18] Y . Zhao, Z. Duan, and G. W en, “Distributed finite-time tracking of multiple Euler–Lagrange systems without velocity measurements, ” In- ternational Journal of Robust and Nonlinear Control , vol. 25, no. 11, pp. 1688–1703, 2015. [19] J. R. Klotz, Z. Kan, J. M. Shea, E. L. P asiliao, and W . E. Dixon, “ Asymp- totic Synchronization of a Leader-Follo wer Network of Uncertain Euler- Lagrange Systems, ” IEEE T ransactions on Control of Network Systems , vol. 2, no. 2, pp. 174–182, June 2015. [20] Z. Feng, G. Hu, W . Ren, W . E. Dixon, and J. Mei, “Distributed Coordination of Multiple Unknown Euler-Lagrange Systems, ” IEEE T ransactions on Control of Network Systems , vol. PP , no. 99, pp. 1– 1, 2016. [21] P . F . Hokayem, D. M. Stipanovi ´ c, and M. W . Spong, “Coordination and Collision A voidance for Lagrangian Systems With Disturbances, ” Applied Mathematics and Computation , vol. 217, no. 3, pp. 1085–1094, 2010. [22] N. Chopra, “Output Synchronization on Strongly Connected Graphs, ” IEEE T ransactions on Automatic Contr ol , vol. 57, no. 11, pp. 2896– 2901, Nov 2012. [23] M. Y e, C. Y u, and B. D. O. Anderson, “Model-Independent Rendezvous of Euler-Lagrange Agents on Directed Networks, ” in Proceedings of IEEE 54th Annual Confer ence on Decision and Control, Osaka, Japan , 2015, pp. 3499–3505. [24] M. Y e, B. D. O. Anderson, and C. Y u, “Distributed model-independent consensus of Euler–Lagrange agents on directed networks, ” Interna- tional Journal of Robust and Nonlinear Control , vol. 27, no. 14, pp. 2428–2450, September 2017. 
[25] ——, "Model-Independent Trajectory Tracking of Euler–Lagrange Agents on Directed Networks," in Proceedings of IEEE 55th Annual Conference on Decision and Control (CDC), Las Vegas, USA, 2016, pp. 6921–6927.
[26] M. W. Spong, S. Hutchinson, and M. Vidyasagar, Robot Modeling and Control. Wiley, New York, 2006, vol. 3.
[27] R. A. Horn and C. R. Johnson, Matrix Analysis. Cambridge University Press, New York, 2012.
[28] H. Zhang, Z. Li, Z. Qu, and F. L. Lewis, "On Constructing Lyapunov Functions for Multi-Agent Systems," Automatica, vol. 58, pp. 39–42, 2015.
[29] Y. Cao, W. Ren, and Z. Meng, "Decentralized Finite-time Sliding Mode Estimators and Their Applications in Decentralized Finite-time Formation Tracking," Systems & Control Letters, vol. 59, no. 9, pp. 522–529, 2010.
[30] J. Cortes, "Discontinuous Dynamical Systems," IEEE Control Systems, vol. 28, no. 3, pp. 36–73, 2008.
[31] M. Ye, B. D. O. Anderson, and C. Yu, "Leader Tracking of Euler-Lagrange Agents on Directed Switching Networks Using A Model-Independent Algorithm," 2018, arXiv:1802.00906 [cs.SY]. [Online]. Available: https://arxiv.org/abs/1802.00906
[32] H. Khalil, Nonlinear Systems. Prentice Hall, 2002.
[33] C. Edwards and S. Spurgeon, Sliding Mode Control: Theory and Applications, ser. Series in Systems and Control. CRC Press, 1998.
[34] D. Liberzon, Switching in Systems and Control. Springer Science & Business Media, 2012.