Distributed Average Consensus under Quantized Communication via Event-Triggered Mass Summation
We study distributed average consensus problems in multi-agent systems with directed communication links that are subject to quantized information flow. The goal of distributed average consensus is for the nodes, each associated with some initial value, to obtain the average (or some value close to the average) of these initial values. In this paper, we present and analyze a distributed averaging algorithm which operates exclusively with quantized values (specifically, the information stored, processed and exchanged between neighboring agents is subject to deterministic uniform quantization) and relies on event-driven updates (e.g., to reduce energy consumption, communication bandwidth, network congestion, and/or processor usage). We characterize the properties of the proposed distributed averaging protocol on quantized values and show that its execution, on any time-invariant and strongly connected digraph, will allow all agents to reach, in finite time, a common consensus value, represented as the ratio of two integers, that is equal to the exact average. We conclude with examples that illustrate the operation, performance, and potential advantages of the proposed algorithm.
Authors: Apostolos I. Rikos, Christoforos N. Hadjicostis
Index Terms—Quantized average consensus, event-triggered, distributed algorithms, quantization, digraphs, multi-agent systems.

I. INTRODUCTION

In recent years, there has been growing interest in the control and coordination of networks consisting of multiple agents, such as groups of sensors [1] or mobile autonomous agents [2]. A problem of particular interest in distributed control is the consensus problem, where the objective is to develop distributed algorithms that can be used by a group of agents in order to reach agreement on a common decision.
The agents start with different initial values/information and are allowed to communicate locally via inter-agent information exchange under some constraints on connectivity. Consensus processes play an important role in many problems, such as leader election [3], motion coordination of multi-vehicle systems [2], [4], and clock synchronization [5].

One special case of the consensus problem is distributed averaging, where each agent (initially endowed with a numerical value) can send/receive information to/from other agents in its neighborhood and update its value iteratively, so that eventually it is able to compute the average of all initial values. Average consensus is an important problem [4], [6]–[12] and has been studied extensively in settings where each agent processes and transmits real-valued states with infinite precision.

The authors are with the Department of Electrical and Computer Engineering at the University of Cyprus, Nicosia, Cyprus. E-mails: {arikos01,chadjic}@ucy.ac.cy.

More recently, researchers have also studied the case where network links can only allow messages of limited length to be transmitted between agents (presumably due to constraints on their capacity), effectively extending techniques for average consensus towards the direction of quantized consensus. Various probabilistic strategies have been proposed, allowing the agents in a network to reach quantized consensus with probability one [13]–[18]. Furthermore, in many types of communication networks it is desirable to update values infrequently to avoid consuming valuable network resources. Thus, there is an increasing need for novel event-triggered algorithms for cooperative control, which aim at more efficient usage of network resources [19]–[21]. In this paper, we present a novel distributed average consensus algorithm that combines both of the features mentioned above.
More specifically, the processing, storing, and exchange of information between neighboring agents is "event-driven" and subject to uniform quantization. Following [15], [18], we assume that the states are integer-valued (which comprises a class of quantization effects). We note that most work dealing with quantization has concentrated on the scenario where the agents have real-valued states but can transmit only quantized values through limited-rate channels (see, e.g., [17], [22]). By contrast, our assumption is also suited to the case where the states are stored in digital memories of finite capacity (as in [15], [18], [23]) and the control actuation of each node is event-based, which enables more efficient use of available resources. The main result of this paper shows that the proposed algorithm will allow all agents to reach quantized consensus in finite time by reaching a value, represented as the ratio of two integer values, that is equal to the average.

II. PRELIMINARIES

The sets of real, rational, integer and natural numbers are denoted by R, Q, Z and N, respectively. The symbol Z_+ denotes the set of nonnegative integers. Consider a network of n (n >= 2) agents communicating only with their immediate neighbors. The communication topology can be captured by a directed graph (digraph), called the communication digraph. A digraph is defined as G_d = (V, E), where V = {v_1, v_2, ..., v_n} is the set of nodes and E ⊆ V × V − {(v_j, v_j) | v_j ∈ V} is the set of edges (self-edges excluded). A directed edge from node v_i to node v_j is denoted by m_ji ≜ (v_j, v_i) ∈ E, and captures the fact that node v_j can receive information from node v_i (but not the other way around). We assume that the given digraph G_d = (V, E) is static (i.e., does not change over time) and strongly connected (i.e., for each pair of nodes v_j, v_i ∈ V, v_j ≠ v_i, there exists a directed path from v_i to v_j).
The subset of nodes that can directly transmit information to node v_j is called the set of in-neighbors of v_j and is represented by N_j^- = {v_i ∈ V | (v_j, v_i) ∈ E}, while the subset of nodes that can directly receive information from node v_j is called the set of out-neighbors of v_j and is represented by N_j^+ = {v_l ∈ V | (v_l, v_j) ∈ E}. The cardinality of N_j^- is called the in-degree of v_j and is denoted by D_j^- (i.e., D_j^- = |N_j^-|), while the cardinality of N_j^+ is called the out-degree of v_j and is denoted by D_j^+ (i.e., D_j^+ = |N_j^+|).

We assume that each node is aware of its out-neighbors and can directly (or indirectly¹) transmit messages to each out-neighbor; however, it cannot necessarily receive messages from them. In the randomized version of the protocol, each node v_j assigns a nonzero probability b_lj to each of its outgoing edges m_lj (including a virtual self-edge), where v_l ∈ N_j^+ ∪ {v_j}. This probability assignment can be captured by a column stochastic matrix B = [b_lj]. A very simple choice would be to set

b_lj = 1/(1 + D_j^+), if v_l ∈ N_j^+ ∪ {v_j},
b_lj = 0, otherwise.

Each nonzero entry b_lj of matrix B represents the probability of node v_j transmitting towards the out-neighbor v_l ∈ N_j^+ through the edge m_lj, or performing no transmission². In the deterministic version of the protocol, each node v_j also assigns a unique order in the set {0, 1, ..., D_j^+ − 1} to each of its outgoing edges m_lj, where v_l ∈ N_j^+. The order of link (v_l, v_j) for node v_j is denoted by P_lj (such that {P_lj | v_l ∈ N_j^+} = {0, 1, ..., D_j^+ − 1}). This unique predetermined order is used during the execution of the proposed distributed algorithm as a way of allowing node v_j to transmit messages to its out-neighbors in a round-robin³ fashion.
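As a concrete illustration of this probability assignment, the following Python sketch builds the column-stochastic matrix B = [b_lj] from out-neighbor lists, under the simple choice b_lj = 1/(1 + D_j^+). The four-node digraph used here is only illustrative, not part of the protocol.

```python
# Sketch: construct the column-stochastic matrix B = [b_lj] with
# b_lj = 1/(1 + D_j^+) for v_l in N_j^+ or l = j (virtual self-edge).
# The out-neighbor lists below describe an illustrative 4-node digraph.

def build_B(out_neighbors):
    """out_neighbors[j] lists the out-neighbors of node j (0-indexed)."""
    n = len(out_neighbors)
    B = [[0.0] * n for _ in range(n)]
    for j, N_plus in enumerate(out_neighbors):
        p = 1.0 / (1 + len(N_plus))       # b_lj = 1/(1 + D_j^+)
        B[j][j] = p                       # self-edge: no transmission
        for l in N_plus:
            B[l][j] = p                   # edge m_lj from v_j to v_l
    return B

out_neighbors = [[1, 2], [3], [0, 1], [2]]   # illustrative digraph
B = build_B(out_neighbors)
# each column of B sums to 1, so B is column stochastic
```

Note that column j of B has exactly 1 + D_j^+ nonzero entries, each equal to 1/(1 + D_j^+), so every column sums to one by construction.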
III. PROBLEM FORMULATION

Consider a strongly connected digraph G_d = (V, E), where each node v_j ∈ V has an initial (i.e., for k = 0) quantized value y_j[0] (for simplicity, we take y_j[0] ∈ Z). In this paper, we develop a distributed algorithm that allows the nodes (while processing and transmitting quantized information via the available communication links between nodes) to eventually obtain, after a finite number of steps, a quantized fraction q^s which is equal to the average q of the initial values of the nodes, where

q = (Σ_{l=1}^{n} y_l[0]) / n.    (1)

¹Indirect transmission could involve broadcasting a message to all out-neighbors while including in the message header the ID of the out-neighbor it is intended for.
²From the definition of B = [b_lj] we have that b_jj = 1/(1 + D_j^+), ∀ v_j ∈ V. This represents the probability that node v_j will not perform a transmission to any of its out-neighbors v_l ∈ N_j^+ (i.e., it will transmit to itself).
³When executing the deterministic protocol, each node v_j transmits to its out-neighbors by following a predetermined order. The next time it needs to transmit to an out-neighbor, it will continue from the outgoing edge at which it stopped the previous time and cycle through the edges in a round-robin fashion according to the predetermined ordering.

Remark 1: Following [15], [18], we assume that the state of each node is integer-valued. This abstraction subsumes a class of quantization effects (e.g., uniform quantization). The quantized average q^s is defined as the ceiling q^s = ⌈q⌉ or the floor q^s = ⌊q⌋ of the true average q of the initial values.

Let S ≜ 1^T y[0], where 1 = [1 ... 1]^T is the vector of ones and y[0] = [y_1[0] ... y_n[0]]^T is the vector of the quantized initial values. We can write S uniquely as S = nL + R, where L and R are both integers and 0 ≤ R < n.
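The decomposition S = nL + R and the floor/ceiling definitions of the quantized average can be checked numerically with a few lines of Python; the initial values below are arbitrary illustrative integers.

```python
# Sketch: quantized average as the floor or ceiling of the true average.
# The initial values y0 are arbitrary illustrative integers.
import math

y0 = [5, 3, 7, 2]                  # y[0], arbitrary integer initial values
n = len(y0)
S = sum(y0)                        # S = 1^T y[0]
L, R = divmod(S, n)                # unique S = n*L + R with 0 <= R < n
q = S / n                          # true average (17/4 = 4.25 here)
q_floor, q_ceil = math.floor(q), math.ceil(q)   # admissible quantized averages

assert S == n * L + R and 0 <= R < n
assert q_floor == L and q_ceil == L + (1 if R else 0)
```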
Thus, either L or L + 1 may be viewed as an integer approximation of the average S/n of the initial values (which may not be an integer in general).

The algorithm we develop is iterative. With respect to the quantization of information flow, at each time step k ∈ Z_+ (where Z_+ is the set of nonnegative integers), each node v_j ∈ V maintains the state variables y_j^s ∈ Z, z_j^s ∈ N and q_j^s = y_j^s / z_j^s, as well as the mass variables y_j ∈ Z and z_j ∈ N_0. The aggregate states are denoted by y^s[k] = [y_1^s[k] ... y_n^s[k]]^T ∈ Z^n, z^s[k] = [z_1^s[k] ... z_n^s[k]]^T ∈ N^n, q^s[k] = [q_1^s[k] ... q_n^s[k]]^T ∈ Q^n and y[k] = [y_1[k] ... y_n[k]]^T ∈ Z^n, z[k] = [z_1[k] ... z_n[k]]^T ∈ N_0^n, respectively.

Following the execution of the proposed distributed algorithm, we argue that there exists k_0 so that for every k ≥ k_0 we have

y_j^s[k] = (Σ_{l=1}^{n} y_l[0]) / α  and  z_j^s[k] = n / α,    (2)

where α ∈ N. This means that

q_j^s[k] = ((Σ_{l=1}^{n} y_l[0]) / α) / (n / α) = q,    (3)

for every v_j ∈ V (i.e., for k ≥ k_0 every node v_j has calculated q as the ratio of two integer values).

IV. RANDOMIZED QUANTIZED AVERAGING ALGORITHM

In this section we propose a distributed information exchange process in which the nodes transmit and receive quantized messages so that they reach quantized average consensus on their initial values after a finite number of steps. The operation of the proposed distributed algorithm is summarized below.

Initialization: Each node v_j selects a set of probabilities {b_lj | v_l ∈ N_j^+ ∪ {v_j}} such that 0 < b_lj < 1 and Σ_{v_l ∈ N_j^+ ∪ {v_j}} b_lj = 1 (see Section II). Each value b_lj represents the probability of node v_j transmitting towards out-neighbor v_l ∈ N_j^+ (or performing no transmission) at any given time step (independently between time steps).
Each node has some initial value y_j[0], and also sets its state variables for time step k = 0 as z_j[0] = 1, z_j^s[0] = 1 and y_j^s[0] = y_j[0], which means that q_j^s[0] = y_j[0] / 1. The iteration involves the following steps:

Step 1. Transmitting: According to the nonzero probabilities b_lj assigned by node v_j during the initialization step, it either transmits z_j[k] and y_j[k] towards an out-neighbor v_l ∈ N_j^+ or performs no transmission. If it performs a transmission towards an out-neighbor v_l ∈ N_j^+, it sets y_j[k] = 0 and z_j[k] = 0.

Step 2. Receiving: Each node v_j receives messages y_i[k] and z_i[k] from its in-neighbors v_i ∈ N_j^-, and sums them along with its stored messages y_j[k] and z_j[k] as

y_j[k+1] = Σ_{v_i ∈ N_j^- ∪ {v_j}} w_ji[k] y_i[k],
z_j[k+1] = Σ_{v_i ∈ N_j^- ∪ {v_j}} w_ji[k] z_i[k],

where w_ji[k] = 0 if no message is received from in-neighbor v_i ∈ N_j^-; otherwise, w_ji[k] = 1.

Step 3. Processing: If z_j[k+1] ≥ z_j^s[k], node v_j sets z_j^s[k+1] = z_j[k+1], y_j^s[k+1] = y_j[k+1] and q_j^s[k+1] = y_j^s[k+1] / z_j^s[k+1]. Then, k is set to k + 1 and the iteration repeats (it goes back to Step 1).

The probabilistic quantized mass transfer process is detailed as Algorithm 1 below (for the case where b_lj = 1/(1 + D_j^+) for v_l ∈ N_j^+ ∪ {v_j} and b_lj = 0 otherwise).

Example 1: Consider the strongly connected digraph G_d = (V, E) shown in Fig. 1, with V = {v_1, v_2, v_3, v_4} and E = {m_21, m_31, m_42, m_13, m_23, m_34}, where the nodes have initial quantized values y_1[0] = 5, y_2[0] = 3, y_3[0] = 7, and y_4[0] = 2, respectively. The average q of the initial values of the nodes is equal to q = 17/4.

Fig. 1. Example of digraph for probabilistic quantized averaging.
Each node v_j ∈ V follows the Initialization steps (1–2) in Algorithm 1, assigning to each of its outgoing edges v_l ∈ N_j^+ ∪ {v_j} a nonzero probability value b_lj = 1/(1 + D_j^+). The assigned values can be seen in the following matrix

B = [ 1/3   0    1/3   0
      1/3  1/2   1/3   0
      1/3   0    1/3  1/2
       0   1/2    0   1/2 ],

while the initial mass and state variables are shown in Table I. For the execution of the proposed algorithm, suppose that at time step k = 0, nodes v_1, v_3 and v_4 transmit to nodes v_2, v_1 and v_3, respectively, whereas node v_2 performs no transmission.

Algorithm 1 Probabilistic Quantized Average Consensus
Input
1) A strongly connected digraph G_d = (V, E) with n = |V| nodes and m = |E| edges.
2) For every v_j we have y_j[0] ∈ Z.
Initialization
Every node v_j ∈ V:
1) Assigns a nonzero probability b_lj to each of its outgoing edges m_lj, where v_l ∈ N_j^+, as follows:
   b_lj = 1/(1 + D_j^+), if l = j or v_l ∈ N_j^+,
   b_lj = 0, if l ≠ j and v_l ∉ N_j^+.
2) Sets z_j[0] = 1, z_j^s[0] = 1 and y_j^s[0] = y_j[0] (which means that q_j^s[0] = y_j[0] / 1).
Iteration
For k = 0, 1, 2, ..., each node v_j ∈ V does the following:
1) It either transmits y_j[k] and z_j[k] towards a randomly chosen out-neighbor v_l ∈ N_j^+ (according to the nonzero probability b_lj) or performs no transmission (according to the nonzero probability b_jj). If it transmitted towards an out-neighbor, it sets y_j[k] = 0 and z_j[k] = 0.
2) It receives y_i[k] and z_i[k] from its in-neighbors v_i ∈ N_j^- and sets
   y_j[k+1] = Σ_{v_i ∈ N_j^- ∪ {v_j}} w_ji[k] y_i[k],
   z_j[k+1] = Σ_{v_i ∈ N_j^- ∪ {v_j}} w_ji[k] z_i[k],
   where w_ji[k] = 1 if node v_j receives values from node v_i (otherwise w_ji[k] = 0).
3) If the following condition holds:
   z_j[k+1] ≥ z_j^s[k],    (4)
   it sets z_j^s[k+1] = z_j[k+1] and y_j^s[k+1] = y_j[k+1], which means that q_j^s[k+1] = y_j^s[k+1] / z_j^s[k+1].
4) It repeats (increases k to k + 1 and goes back to Step 1).

The mass and state variables for k = 1 are shown in Table II.

TABLE I
INITIAL MASS AND STATE VARIABLES FOR FIG. 1

Nodes | Mass and State Variables for k = 0
v_j   | y_j[0]  z_j[0]  y_j^s[0]  z_j^s[0]  q_j^s[0]
v_1   |   5       1        5         1        5/1
v_2   |   3       1        3         1        3/1
v_3   |   7       1        7         1        7/1
v_4   |   2       1        2         1        2/1

TABLE II
MASS AND STATE VARIABLES FOR FIG. 1 FOR k = 1

Nodes | Mass and State Variables for k = 1
v_j   | y_j[1]  z_j[1]  y_j^s[1]  z_j^s[1]  q_j^s[1]
v_1   |   7       1        7         1        7/1
v_2   |   8       2        8         2        8/2
v_3   |   2       1        2         1        2/1
v_4   |   0       0        2         1        2/1

It is important to notice here that nodes v_1 and v_3 have mass variables y_1[1] = y_3[0] = 7, z_1[1] = z_3[0] = 1 and y_3[1] = y_4[0] = 2, z_3[1] = z_4[0] = 1 (and update their state variables), while node v_2 has mass variables y_2[1] = y_1[0] + y_2[0] = 8, z_2[1] = z_1[0] + z_2[0] = 2 (also updating its state variables). In the latter case we can say that the mass variables of nodes v_1 and v_2 "merge" at node v_2.

Suppose now that at time step k = 1, nodes v_1 and v_2 transmit to nodes v_3 and v_4, respectively. Node v_3 does not perform a transmission, while node v_4 has no mass to transmit. The mass and state variables for k = 2 are shown in Table III.

TABLE III
MASS AND STATE VARIABLES FOR FIG. 1 FOR k = 2

Nodes | Mass and State Variables for k = 2
v_j   | y_j[2]  z_j[2]  y_j^s[2]  z_j^s[2]  q_j^s[2]
v_1   |   0       0        7         1        7/1
v_2   |   0       0        8         2        8/2
v_3   |   9       2        9         2        9/2
v_4   |   8       2        8         2        8/2

Then, suppose that at time step k = 2, node v_4 transmits to node v_3, while node v_3 does not perform a transmission (nodes v_1 and v_2 have no mass to transmit). The mass and state variables for k = 3 are shown in Table IV. We can see that, at time step k = 3, all the initial mass variables have "merged" at node v_3 (i.e., y_3[3] = y_1[0] + y_2[0] + y_3[0] + y_4[0] and z_3[3] = z_1[0] + z_2[0] + z_3[0] + z_4[0]).
Now suppose that during time steps k = 3, 4, 5 the following transmissions take place: "v_3 transmits to v_1", "v_1 transmits to v_2", "v_2 transmits to v_4". The mass and state variables for k = 5 are shown in Table V.

TABLE IV
MASS AND STATE VARIABLES FOR FIG. 1 FOR k = 3

Nodes | Mass and State Variables for k = 3
v_j   | y_j[3]  z_j[3]  y_j^s[3]  z_j^s[3]  q_j^s[3]
v_1   |   0       0        7         1        7/1
v_2   |   0       0        8         2        8/2
v_3   |  17       4       17         4       17/4
v_4   |   0       0        8         2        8/2

TABLE V
MASS AND STATE VARIABLES FOR FIG. 1 FOR k = 5

Nodes | Mass and State Variables for k = 5
v_j   | y_j[5]  z_j[5]  y_j^s[5]  z_j^s[5]  q_j^s[5]
v_1   |   0       0       17         4       17/4
v_2   |   0       0       17         4       17/4
v_3   |   0       0       17         4       17/4
v_4   |  17       4       17         4       17/4

From Table V, we can see that for k ≥ 5 it holds that q_j^s[k] = q = 17/4 for every v_j ∈ V, which means that every node v_j eventually obtains a quantized fraction q_j^s which is equal to the average q of the initial values of the nodes.

Remark 2: From the previous example, it is important to notice that, once the initial mass variables "merge" at time step k = 3, they remain "merged" during the operation of Algorithm 1 for every time step k ≥ 3.

We are now ready to prove that during the operation of Algorithm 1 each agent obtains two integer values y^s and z^s, the ratio of which is equal to the average q of the initial values of the nodes.

Proposition 1: Consider a strongly connected digraph G_d = (V, E) with n = |V| nodes and m = |E| edges, and let z_j[0] = 1 and y_j[0] ∈ Z for every node v_j ∈ V at time step k = 0. Suppose that each node v_j ∈ V follows the Initialization and Iteration steps as described in Algorithm 1. Let V^+[k] ⊆ V be the set of nodes v_j with positive mass variable z_j[k] at iteration k (i.e., V^+[k] = {v_j ∈ V | z_j[k] > 0}). During the execution of Algorithm 1, for every k ≥ 0, we have

1 ≤ |V^+[k+1]| ≤ |V^+[k]| ≤ n.
Proof: During Iteration Steps 1 and 2 of Algorithm 1, at time step k, each node v_j ∈ V transmits z_j[k] and y_j[k] towards a randomly chosen out-neighbor v_l ∈ N_j^+, or performs no transmission. Then, it receives y_i[k] and z_i[k] from its in-neighbors v_i ∈ N_j^- and sets y_j[k+1] = Σ_{v_i ∈ N_j^- ∪ {v_j}} w_ji[k] y_i[k] and z_j[k+1] = Σ_{v_i ∈ N_j^- ∪ {v_j}} w_ji[k] z_i[k]. Iteration Steps 1 and 2 of Algorithm 1, during time step k, can thus be expressed according to the following equations:

y[k+1] = W[k] y[k],    (5)
z[k+1] = W[k] z[k],    (6)

where y[k] = [y_1[k] ... y_n[k]]^T, z[k] = [z_1[k] ... z_n[k]]^T, and W[k] = [w_lj[k]] is an n × n binary (i.e., for every k, w_lj[k] is equal to either 1 or 0 for every (v_l, v_j) ∈ E), column stochastic matrix.

Focusing on (6), during time step k_0, let us assume without loss of generality that z[k_0] = [z_1[k_0] ... z_{p_0}[k_0] 0 ... 0]^T, which means that z_i[k_0] > 0, ∀ v_i ∈ {v_1, ..., v_{p_0}}, and z_l[k_0] = 0, ∀ v_l ∈ V − {v_1, ..., v_{p_0}}. We can assume without loss of generality that the nodes with zero mass do not transmit (i.e., they transmit to themselves). Let us first consider the scenario where Σ_{v_i ∈ N_j^- ∪ {v_j}} w_ji[k_0] = 1, ∀ v_j ∈ V (i.e., in every row of W[k_0] exactly one element is equal to 1 and all the others are equal to zero). This means that every node v_j receives exactly one mass variable z_i[k_0] (the bottom n − p_0 nodes receive their own mass). Since, at time step k_0, we have p_0 nodes with nonzero mass variables, at time step k_0 + 1 exactly p_0 nodes have a nonzero mass variable. As a result, for this scenario, we have |V^+[k_0 + 1]| = |V^+[k_0]|.
Next, without loss of generality, let us consider the scenario where w_{j i_1}[k_0] = 1 and w_{j i_2}[k_0] = 1 (where v_{i_1}, v_{i_2} ∈ N_j^- ∪ {v_j}) and w_ji[k_0] = 0, ∀ v_i ∈ {N_j^- ∪ {v_j}} − {v_{i_1}, v_{i_2}} (i.e., the j-th row of matrix W[k_0] has exactly 2 elements equal to 1 and all the others equal to zero). Also, let us assume that Σ_{v_i ∈ N_l^- ∪ {v_l}} w_li[k_0] ≤ 1, ∀ v_l ∈ V − {v_j} (i.e., in every row of W[k_0] except row j, at most one element is equal to 1 and all the others are equal to zero). The above assumptions regarding matrix W[k_0] mean that, during time step k_0, only node v_j receives two mass variables (from nodes v_{i_1} and v_{i_2}), and all the other nodes receive at most one mass variable. We have that z_j[k_0 + 1] = z_{i_1}[k_0] + z_{i_2}[k_0], and z_l[k_0 + 1] = z_i[k_0] for v_l ∈ V − {v_j} and some v_i ∈ V − {v_{i_1}, v_{i_2}} (i.e., node v_j received two nonzero mass variables, while every other node received at most one nonzero mass variable, possibly its own). Since, at time step k_0, we had p_0 nodes with nonzero mass variables, and at time step k_0 + 1 node v_j received (and summed) two nonzero mass variables while all the other nodes received at most one nonzero mass variable, we have p_0 − 1 nodes with nonzero mass variables at time step k_0 + 1. This means that |V^+[k_0 + 1]| < |V^+[k_0]|.

By extending the above analysis to scenarios where each row of W[k], at different time steps k, has multiple elements equal to 1 (while W[k] remains column stochastic), we can see that the number of nodes v_j with nonzero mass variable z_j[k] > 0 is non-increasing, and thus |V^+[k+1]| ≤ |V^+[k]|, ∀ k ∈ N.

Proposition 2: Consider a strongly connected digraph G_d = (V, E) with n = |V| nodes and m = |E| edges, and let z_j[0] = 1 and y_j[0] ∈ Z for every node v_j ∈ V at time step k = 0.
Suppose that each node v_j ∈ V follows the Initialization and Iteration steps as described in Algorithm 1. With probability one, we can find k_0 ∈ N so that for every k ≥ k_0 we have

y_j^s[k] = Σ_{l=1}^{n} y_l[0]  and  z_j^s[k] = n,

which means that q_j^s[k] = (Σ_{l=1}^{n} y_l[0]) / n for every v_j ∈ V (i.e., for k ≥ k_0 every node v_j has calculated q as the ratio of two integer values).

Proof: From Proposition 1 we have that |V^+[k+1]| ≤ |V^+[k]| (i.e., the number of nonzero mass variables is non-increasing). We will show that the number of nonzero mass variables decreases after a finite number of steps, until, at some k_0 ∈ N, we have y_j[k_0] = Σ_{l=1}^{n} y_l[0] and z_j[k_0] = n for some node v_j ∈ V, and y_i[k_0] = 0 and z_i[k_0] = 0 for each v_i ∈ V − {v_j}. In this scenario, (2) and (3) hold for each node v_j for the case where α = 1.

Iteration Steps 1 and 2 of Algorithm 1, during time step k, can be expressed according to (5) and (6), where y[k] = [y_1[k] ... y_n[k]]^T, z[k] = [z_1[k] ... z_n[k]]^T, and W[k] = [w_lj[k]] is an n × n binary, column stochastic matrix. Focusing on (6), suppose that, during time step k_0, we have z_i[k_0] > 0 and z_j[k_0] > 0, along with w_li[k_0] = 1 and w_lj[k_0] = 1. This scenario occurs with probability equal to (1 + D_i^+)^{-1} (1 + D_j^+)^{-1} (i.e., when nodes v_i and v_j both transmit towards node v_l); conversely, the mass variables of v_i and v_j do not "merge" at v_l with probability 1 − (1 + D_i^+)^{-1} (1 + D_j^+)^{-1}. By extending the above analysis, we have that, every n time steps, the probability that two nonzero mass variables "merge" is positive and lower bounded by (Π_{j=1}^{n} (1 + D_j^+)^{-1})^2 (i.e., P_merge ≥ (Π_{j=1}^{n} (1 + D_j^+)^{-1})^2).
Thus, from the execution of Algorithm 1, the probability that all nonzero mass variables have "merged" becomes arbitrarily close to 1 for sufficiently large k. This means that there exists k_0 ∈ N for which y_j[k_0] = Σ_{l=1}^{n} y_l[0] and z_j[k_0] = n for some node v_j ∈ V, and y_i[k_0] = 0 and z_i[k_0] = 0 for each v_i ∈ V − {v_j}. Once this "merging" of all nonzero mass variables occurs, the merged mass variables of node v_j will update the state variables of every node v_i ∈ V (because they will eventually be forwarded to all other nodes), which means that there exists k_1 ∈ N (where k_1 > k_0) for which y_i^s[k_1] = Σ_{l=1}^{n} y_l[0] and z_i^s[k_1] = n for every node v_i ∈ V. This means that, after a finite number of steps, (2) and (3) hold for every node v_j ∈ V for the case where α = 1.

Remark 3: It is interesting to note that during the operation of Algorithm 1, after a finite number of steps k_0, the state variables of each node v_j ∈ V become equal to y_j^s[k] = Σ_{l=1}^{n} y_l[0] and z_j^s[k] = n, so that q_j^s[k] = (Σ_{l=1}^{n} y_l[0]) / n for k ≥ k_0. This means that (2) and (3) hold for each node v_j for the case where α = 1. However, this does not necessarily hold for the distributed algorithm presented in the following section.

Remark 4: It is also worth pointing out that during the operation of Algorithm 1, once (2) and (3) hold for each node v_j for the case where α = 1, each node also obtains knowledge of the total number of nodes in the digraph, since z_j^s[k] = n, ∀ v_j ∈ V, which may be useful for determining the number of agents in the network.
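To make the randomized protocol concrete, the following self-contained Python sketch simulates the Initialization and Iteration steps of Algorithm 1. The digraph and initial values are those of Example 1; the random seed and the step cap are illustrative choices, not part of the algorithm.

```python
# Sketch: simulation of Algorithm 1 (randomized quantized average consensus).
# Digraph/initial values follow Example 1; seed and step cap are arbitrary.
import random
from fractions import Fraction

random.seed(1)
out_neighbors = {0: [1, 2], 1: [3], 2: [0, 1], 3: [2]}  # strongly connected
y0 = {0: 5, 1: 3, 2: 7, 3: 2}
n = len(y0)
q = Fraction(sum(y0.values()), n)        # exact average, 17/4

# Initialization: mass variables (y, z) and state variables (ys, zs).
y, z = dict(y0), {j: 1 for j in y0}
ys, zs = dict(y0), {j: 1 for j in y0}

for k in range(10_000):
    incoming = {j: [] for j in y0}
    for j in y0:                         # Step 1: transmit or keep the mass
        if z[j] > 0:
            target = random.choice(out_neighbors[j] + [j])  # prob 1/(1+D_j^+)
        else:
            target = j                   # zero mass: "transmit to itself"
        incoming[target].append((y[j], z[j]))
    for j in y0:                         # Step 2: receive and sum masses
        y[j] = sum(m for m, _ in incoming[j])
        z[j] = sum(c for _, c in incoming[j])
    for j in y0:                         # Step 3: event-driven state update
        if z[j] >= zs[j]:                # condition (4), against old zs
            ys[j], zs[j] = y[j], z[j]
    if all(Fraction(ys[j], zs[j]) == q for j in y0):
        break                            # every node reports q = 17/4
```

Appending `j` itself to the candidate targets realizes the virtual self-edge b_jj = 1/(1 + D_j^+); a node that transmitted away holds nothing, which is captured by its mass simply not appearing in its own `incoming` list. By Proposition 2 the loop exits with probability one, so the step cap is only a safeguard.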
V. EVENT-TRIGGERED QUANTIZED AVERAGING ALGORITHM

In this section we propose a distributed algorithm in which the nodes receive quantized messages and perform transmissions according to a set of deterministic conditions, so that they reach quantized average consensus on their initial values. The operation of the proposed distributed algorithm is summarized below.

Initialization: Each node v_j assigns to each of its outgoing edges v_l ∈ N_j^+ a unique order P_lj in the set {0, 1, ..., D_j^+ − 1}, which will be used to transmit messages to its out-neighbors in a round-robin fashion. Node v_j has initial value y_j[0] and sets its state variables for time step k = 0 as z_j[0] = 1, z_j^s[0] = 1 and y_j^s[0] = y_j[0], which means that q_j^s[0] = y_j[0] / 1. Then, it chooses an out-neighbor v_l ∈ N_j^+ (according to the predetermined order P_lj) and transmits z_j[0] and y_j[0] to that particular neighbor. Finally, it sets y_j[0] = 0 and z_j[0] = 0 (since it performed a transmission).

The iteration involves the following steps:

Step 1. Receiving: Each node v_j receives messages y_i[k] and z_i[k] from its in-neighbors v_i ∈ N_j^- and sums them along with its stored messages y_j[k] and z_j[k] to obtain

y_j[k+1] = Σ_{v_i ∈ N_j^- ∪ {v_j}} w_ji[k] y_i[k],
z_j[k+1] = Σ_{v_i ∈ N_j^- ∪ {v_j}} w_ji[k] z_i[k],

where w_ji[k] = 0 if no message is received from in-neighbor v_i ∈ N_j^-; otherwise, w_ji[k] = 1.

Step 2. Event-Triggered Conditions: Node v_j checks the following conditions:
1) whether z_j[k+1] is greater than z_j^s[k];
2) if z_j[k+1] is equal to z_j^s[k], whether y_j[k+1] is greater than (or equal to) y_j^s[k].
If one of the above two conditions holds, it sets y_j^s[k+1] = y_j[k+1], z_j^s[k+1] = z_j[k+1] and q_j^s[k+1] = y_j^s[k+1] / z_j^s[k+1].
Step 3. Transmitting: If the event-triggered conditions above do not hold, no transmission is performed. Otherwise, node v_j chooses an out-neighbor v_l ∈ N_j^+ according to the order P_lj (in a round-robin fashion) and transmits z_j[k+1] and y_j[k+1]. Then, since it transmitted its stored mass, it sets y_j[k+1] = 0 and z_j[k+1] = 0. Then, k is set to k + 1 and the iteration repeats (it goes back to Step 1).

This event-based quantized mass transfer process is summarized as Algorithm 2, where each node v_j at time step k maintains mass variables y_j[k] and z_j[k] and state variables y_j^s[k] and z_j^s[k] (with q_j^s[k] = y_j^s[k] / z_j^s[k]). Note that the event-triggered conditions effectively imply that no transmission is performed if z_j[k] = 0.

We now analyze the functionality of the distributed algorithm and prove that it allows all agents to reach quantized average consensus after a finite number of steps. Depending on the graph structure and the initial mass variables of each node, we have the following two possible scenarios:

A. Full Mass Summation: there exists k_0 ∈ N such that y_j[k_0] = Σ_{l=1}^{n} y_l[0] and z_j[k_0] = n for some node v_j ∈ V, and y_i[k_0] = 0 and z_i[k_0] = 0 for each v_i ∈ V − {v_j}. In this scenario, (2) and (3) hold for each node v_j for the case where α = 1.

B. Partial Mass Summation: there exists k_0 ∈ N such that for every k ≥ k_0 there exists a set V_p[k] ⊆ V in which y_j[k] = y_i[k] and z_j[k] = z_i[k], ∀ v_j, v_i ∈ V_p[k], and y_l[k] = 0 and z_l[k] = 0 for each v_l ∈ V − V_p[k]. In this scenario, (2) and (3) hold for each node v_j for the case where α = |V_p[k]|.

An example of the "Partial Mass Summation" scenario is given below.
Algorithm 2 Deterministic Quantized Average Consensus
Input
1) A strongly connected digraph G_d = (V, E) with n = |V| nodes and m = |E| edges.
2) For every v_j we have y_j[0] ∈ Z.
Initialization
Every node v_j ∈ V:
1) Assigns to each of its outgoing edges v_l ∈ N_j^+ a unique order P_lj in the set {0, 1, ..., D_j^+ − 1}.
2) Sets z_j[0] = 1, z_j^s[0] = 1 and y_j^s[0] = y_j[0] (which means that q_j^s[0] = y_j[0] / 1).
3) Chooses an out-neighbor v_l ∈ N_j^+ according to the predetermined order P_lj (i.e., it chooses v_l ∈ N_j^+ such that P_lj = 0) and transmits z_j[0] and y_j[0] to this out-neighbor. Then, it sets y_j[0] = 0 and z_j[0] = 0.
Iteration
For k = 0, 1, 2, ..., each node v_j ∈ V does the following:
1) It receives y_i[k] and z_i[k] from its in-neighbors v_i ∈ N_j^- and sets
   y_j[k+1] = Σ_{v_i ∈ N_j^- ∪ {v_j}} w_ji[k] y_i[k],
   z_j[k+1] = Σ_{v_i ∈ N_j^- ∪ {v_j}} w_ji[k] z_i[k],
   where w_ji[k] = 0 if no message is received (otherwise w_ji[k] = 1).
2) Event-triggered conditions: If one of the following two conditions holds, node v_j performs Steps 3 and 4 below; otherwise, it skips Steps 3 and 4.
   Condition 1: z_j[k+1] > z_j^s[k].
   Condition 2: z_j[k+1] = z_j^s[k] and y_j[k+1] ≥ y_j^s[k].
3) It sets z_j^s[k+1] = z_j[k+1] and y_j^s[k+1] = y_j[k+1], which implies that q_j^s[k+1] = y_j^s[k+1] / z_j^s[k+1].
4) It chooses an out-neighbor v_l ∈ N_j^+ according to the order P_lj (in a round-robin fashion) and transmits z_j[k+1] and y_j[k+1]. Then, it sets y_j[k+1] = 0 and z_j[k+1] = 0.
5) It repeats (increases k to k + 1 and goes back to Step 1).

Example 2: Consider the strongly connected digraph G_d = (V, E) shown in Fig. 2, with V = {v_1, v_2, v_3, v_4} and E = {m_21, m_32, m_43, m_14}, where the nodes have initial quantized values y_1[0] = 9, y_2[0] = 3, y_3[0] = 9 and y_4[0] = 3, respectively.
We have that the average of the initial values of the nodes is equal to q = 24/4. At time step k = 0, the initial mass and state variables for nodes v_1, v_2, v_3, v_4 are shown in Table VI.

[Fig. 2. Example of digraph for partial mass summation.]

Table VI: Initial mass and state variables for Fig. 2 (k = 0)

  v_j   y_j[0]   z_j[0]   y_j^s[0]   z_j^s[0]   q_j^s[0]
  v_1     9        1         9          1         9/1
  v_2     3        1         3          1         3/1
  v_3     9        1         9          1         9/1
  v_4     3        1         3          1         3/1

Then, during time step k = 0, every node v_j will transmit its mass variables y_j[0] and z_j[0] (since the event-trigger conditions hold for every node). The mass and state variables of every node at k = 1 are shown in Table VII. It is important to notice here that, for time step k = 1, nodes v_1 and v_3 have mass variables equal to y_1[1] = 3, z_1[1] = 1 and y_3[1] = 3, z_3[1] = 1, but the corresponding state variables are equal to y_1^s[1] = 9, z_1^s[1] = 1 and y_3^s[1] = 9, z_3^s[1] = 1. This means that at time step k = 1 the event-trigger conditions do not hold for nodes v_1 and v_3; thus, these nodes will not transmit their mass variables (i.e., they will not execute Steps 3 and 4 of Algorithm 2). The mass and state variables of every node at k = 2 are shown in Table VIII.

Table VII: Mass and state variables for Fig. 2 for k = 1

  v_j   y_j[1]   z_j[1]   y_j^s[1]   z_j^s[1]   q_j^s[1]
  v_1     3        1         9          1         9/1
  v_2     9        1         9          1         9/1
  v_3     3        1         9          1         9/1
  v_4     9        1         9          1         9/1

Table VIII: Mass and state variables for Fig. 2 for k = 2

  v_j   y_j[2]   z_j[2]   y_j^s[2]   z_j^s[2]   q_j^s[2]
  v_1    12       2        12          2        12/2
  v_2     0       0         9          1         9/1
  v_3    12       2        12          2        12/2
  v_4     0       0         9          1         9/1

During time step k = 2 we can see that the event-trigger conditions hold for nodes v_1 and v_3, which means that they will transmit their mass variables towards nodes v_2 and v_4, respectively. The mass and state variables of every node for k = 3 are shown in Table IX.

Table IX: Mass and state variables for Fig. 2 for k = 3

  v_j   y_j[3]   z_j[3]   y_j^s[3]   z_j^s[3]   q_j^s[3]
  v_1     0       0        12          2        12/2
  v_2    12       2        12          2        12/2
  v_3     0       0        12          2        12/2
  v_4    12       2        12          2        12/2

Following the algorithm's operation we have that, for k = 3, the event-trigger conditions hold for nodes v_2 and v_4, which means that they will transmit their masses to nodes v_3 and v_1, respectively. As a result we have, for k = 4, that the mass variables for nodes v_1 and v_3 are y_1[4] = y_4[3] = 12, z_1[4] = z_4[3] = 2 and y_3[4] = y_2[3] = 12, z_3[4] = z_2[3] = 2, respectively. Then, during time step k = 4, the event-trigger conditions hold for nodes v_1 and v_3, which means that they will transmit their mass variables to nodes v_2 and v_4. We can easily notice that, during the execution of Algorithm 2 for k ≥ 3, we have V_p[k] = V_p[k+2] (where V_p[3] = {v_2, v_4} and V_p[4] = {v_1, v_3}), which means that the exchange of mass variables between the nodes follows a periodic behavior and the mass variables will never "merge" at one node (i.e., there exists no k_0 for which y_j[k_0] = Σ_{l=1}^{n} y_l[0] and z_j[k_0] = n, for some node v_j ∈ V, with y_i[k_0] = 0 and z_i[k_0] = 0 for each v_i ∈ V − {v_j}). As a result, from Table IX, we can see that for k ≥ 3 it holds that q_j^s[k] = q = (24/α)/(4/α) for every v_j ∈ V, where α = |V_p[k]| = 2.
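The round-based operation of Algorithm 2, including the periodic behavior of Example 2, can be reproduced with a short simulation. The sketch below is our own minimal Python implementation (function and variable names are ours, not from the paper); it assumes synchronous rounds in which a mass transmitted at step k is received at step k+1, exactly as in the iteration above:

```python
from fractions import Fraction

def quantized_consensus(out_neighbors, y0, steps):
    """Minimal sketch of Algorithm 2 (deterministic quantized average
    consensus).  out_neighbors[j] lists node j's out-neighbors in
    priority order P_lj; y0 holds the initial integer values y_j[0]."""
    n = len(y0)
    y, z = list(y0), [1] * n          # mass variables y_j[k], z_j[k]
    ys, zs = list(y0), [1] * n        # state variables y_j^s[k], z_j^s[k]
    ptr = [0] * n                     # round-robin position per node
    inbox = [[] for _ in range(n)]    # messages in flight

    def transmit(j):
        dest = out_neighbors[j][ptr[j]]
        ptr[j] = (ptr[j] + 1) % len(out_neighbors[j])
        inbox[dest].append((y[j], z[j]))
        y[j] = z[j] = 0

    for j in range(n):                # Initialization: everyone transmits
        transmit(j)
    for _ in range(steps):
        for j in range(n):            # Step 1: sum the received masses
            for yi, zi in inbox[j]:
                y[j] += yi
                z[j] += zi
        inbox = [[] for _ in range(n)]
        for j in range(n):            # Steps 2-4: event-trigger, transmit
            if z[j] > zs[j] or (z[j] == zs[j] and y[j] >= ys[j]):
                ys[j], zs[j] = y[j], z[j]
                transmit(j)
    return [Fraction(ys[j], zs[j]) for j in range(n)]

# Example 2: directed cycle v1 -> v2 -> v3 -> v4 -> v1, y[0] = (9, 3, 9, 3).
print(quantized_consensus([[1], [2], [3], [0]], [9, 3, 9, 3], 10))
```

Tracing this simulation step by step reproduces Tables VI through IX: even though the masses circulate periodically between {v_2, v_4} and {v_1, v_3} forever, every state ratio settles at 12/2 = 24/4 = q from k = 3 onward.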
This means that, after a finite number of steps, every node v_j will obtain a quantized fraction q_j^s which is equal to the average q of the initial values of the nodes.

Remark 5: Note that the periodic behavior in the above graph is not only a function of the graph structure but also of the initial conditions. Also note that, in general, the priorities will also play a role because they determine the order in which nodes transmit to their out-neighbors (in the example, priorities do not come into play because each node has exactly one out-neighbor).

Proposition 3: Consider a strongly connected digraph G_d = (V, E) with n = |V| nodes and m = |E| edges. The execution of Algorithm 2 allows each node v_j ∈ V to reach quantized average consensus after a finite number of steps, bounded by n^5.

VI. SIMULATION RESULTS

In this section, we present simulation results and comparisons. Specifically, we present simulation results of the proposed distributed algorithms for the digraph G_d = (V, E) (borrowed from [24]), shown in Fig. 3, with V = {v_1, v_2, v_3, v_4, v_5, v_6, v_7} and E = {m_21, m_51, m_12, m_52, m_13, m_53, m_24, m_54, m_65, m_75, m_36, m_47, m_67}, where the nodes have initial quantized values y_1[0] = 5, y_2[0] = 4, y_3[0] = 8, y_4[0] = 3, y_5[0] = 5, y_6[0] = 2, and y_7[0] = 7, respectively. The average q of the initial values of the nodes is equal to q = 34/7.

[Fig. 3. Example of digraph for comparison of Algorithms 1 and 2.]

In Figure 4 we plot the state variable q_j^s[k] of every node v_j ∈ V as a function of the number of iterations k for the digraph shown in Fig. 3. The plot demonstrates that the proposed distributed algorithms reach a common quantized consensus value equal to the average of the initial states after a finite number of iterations.
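The digraph of Fig. 3 can be reconstructed directly from its edge labels. The sketch below is our own code, written under the paper's convention (consistent with Example 2) that m_lj denotes the link from v_j to v_l; it builds the out-neighbor lists, checks strong connectivity by forward and reverse reachability, and confirms q = 34/7:

```python
from collections import deque
from fractions import Fraction

# Edge set of Fig. 3, where m_lj denotes the link from v_j to v_l
edges = ["m21", "m51", "m12", "m52", "m13", "m53", "m24",
         "m54", "m65", "m75", "m36", "m47", "m67"]
n = 7
out_neighbors = [[] for _ in range(n)]
for e in edges:
    l, j = int(e[1]) - 1, int(e[2]) - 1   # 0-indexed head and tail
    out_neighbors[j].append(l)

def reachable(adj, s):
    """Return the set of nodes reachable from s via BFS."""
    seen, q = {s}, deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                q.append(v)
    return seen

# Strong connectivity: all nodes reachable from v1, and v1 from all nodes
rev = [[] for _ in range(n)]
for j in range(n):
    for l in out_neighbors[j]:
        rev[l].append(j)
assert len(reachable(out_neighbors, 0)) == n
assert len(reachable(rev, 0)) == n

y0 = [5, 4, 8, 3, 5, 2, 7]
print(Fraction(sum(y0), n))   # -> 34/7
```

Since gcd(34, 7) = 1, no partial-mass configuration with α > 1 can hold equal integer masses at every node of V_p, so on this digraph the execution must end in full mass summation with every state ratio equal to 34/7.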
[Fig. 4. Comparison between Algorithm 1 and Algorithm 2 for the digraph shown in Fig. 3. Top figure: node state variables q_j^s plotted against the number of iterations for Algorithm 1 (probabilistic quantized average consensus). Bottom figure: node state variables q_j^s plotted against the number of iterations for Algorithm 2 (deterministic quantized average consensus).]

VII. CONCLUSIONS

We have considered the quantized average consensus problem and presented one randomized and one deterministic distributed averaging algorithm in which the processing, storing, and exchange of information between neighboring agents is subject to uniform quantization. We analyzed the operation of the proposed algorithms and established that they reach quantized consensus after a finite number of iterations. In the future we plan to investigate how the graph structure relates to full and partial mass summation of the initial values. Furthermore, we plan to extend the operation of the proposed algorithms to more realistic settings, such as transmission delays over the communication links and the presence of unreliable links in the communication network.

REFERENCES

[1] L. Xiao, S. Boyd, and S. Lall, "A scheme for robust distributed sensor fusion based on average consensus," Proceedings of the International Symposium on Information Processing in Sensor Networks, pp. 63–70, April 2005.
[2] R. Olfati-Saber and R. Murray, "Consensus problems in networks of agents with switching topology and time-delays," IEEE Transactions on Automatic Control, vol. 49, no. 9, pp. 1520–1533, September 2004.
[3] N. Lynch, Distributed Algorithms. San Mateo, CA: Morgan Kaufmann Publishers, 1996.
[4] V. D. Blondel, J. M. Hendrickx, A. Olshevsky, and J. N.
Tsitsiklis, "Convergence in multiagent coordination, consensus, and flocking," Proceedings of the IEEE Conference on Decision and Control, pp. 2996–3000, 2005.
[5] L. Schenato and G. Gamba, "A distributed consensus protocol for clock synchronization in wireless sensor network," Proceedings of the IEEE Conference on Decision and Control, pp. 2289–2294, 2007.
[6] C. N. Hadjicostis, A. D. Domínguez-García, and T. Charalambous, "Distributed averaging and balancing in network systems, with applications to coordination and control," Foundations and Trends in Systems and Control, vol. 5, no. 3–4, 2018.
[7] S. Sundaram and C. N. Hadjicostis, "Distributed function calculation and consensus using linear iterative strategies," IEEE Journal on Selected Areas in Communications, vol. 26, no. 4, pp. 650–660, May 2008.
[8] T. Charalambous, Y. Yuan, T. Yang, W. Pan, C. N. Hadjicostis, and M. Johansson, "Decentralised minimum-time average consensus in digraphs," Proceedings of the IEEE Conference on Decision and Control (CDC), pp. 2617–2622, 2013.
[9] L. Xiao and S. Boyd, "Fast linear iterations for distributed averaging," Systems and Control Letters, vol. 53, no. 1, pp. 65–78, September 2004.
[10] A. G. Dimakis, S. Kar, J. M. F. Moura, M. G. Rabbat, and A. Scaglione, "Gossip algorithms for distributed signal processing," Proceedings of the IEEE, vol. 98, no. 11, pp. 1847–1864, November 2010.
[11] J. Liu, S. Mou, A. S. Morse, B. D. O. Anderson, and C. Yu, "Deterministic gossiping," Proceedings of the IEEE, vol. 99, no. 9, pp. 1505–1524, September 2011.
[12] J. Tsitsiklis, "Problems in decentralized decision making and computation," Ph.D. dissertation, Massachusetts Institute of Technology, Cambridge, MA, 1984.
[13] T. C. Aysal, M. Coates, and M. Rabbat, "Distributed average consensus using probabilistic quantization," IEEE/SP Workshop on Statistical Signal Processing, pp. 640–644, 2007.
[14] J. Lavaei and R.
M. Murray, "Quantized consensus by means of gossip algorithm," IEEE Transactions on Automatic Control, vol. 57, no. 1, pp. 19–32, January 2012.
[15] A. Kashyap, T. Basar, and R. Srikant, "Quantized consensus," Automatica, vol. 43, no. 7, pp. 1192–1203, 2007.
[16] P. Frasca, R. Carli, F. Fagnani, and S. Zampieri, "Average consensus on networks with quantized communication," International Journal of Robust and Nonlinear Control, vol. 19, no. 16, pp. 1787–1816, November 2009.
[17] M. E. Chamie, J. Liu, and T. Basar, "Design and analysis of distributed averaging with quantized communication," IEEE Transactions on Automatic Control, vol. 61, no. 12, pp. 3870–3884, December 2016.
[18] K. Cai and H. Ishii, "Quantized consensus and averaging on gossip digraphs," IEEE Transactions on Automatic Control, vol. 56, no. 9, pp. 2087–2100, September 2011.
[19] G. S. Seyboth, D. V. Dimarogonas, and K. H. Johansson, "Event-based broadcasting for multi-agent average consensus," Automatica, vol. 49, no. 1, pp. 245–252, January 2013.
[20] C. Nowzari and J. Cortés, "Distributed event-triggered coordination for average consensus on weight-balanced digraphs," Automatica, August 2014.
[21] Z. Liu, Z. Chen, and Z. Yuan, "Event-triggered average-consensus of multi-agent systems with weighted and direct topology," Journal of Systems Science and Complexity, vol. 25, no. 5, pp. 845–855, October 2012.
[22] R. Carli, F. Fagnani, A. Speranzon, and S. Zampieri, "Communication constraints in the average consensus problem," Automatica, vol. 44, no. 3, pp. 671–684, 2008.
[23] A. Nedic, A. Olshevsky, A. Ozdaglar, and J. Tsitsiklis, "On distributed averaging algorithms and quantization effects," IEEE Transactions on Automatic Control, vol. 54, no. 11, pp. 2506–2517, November 2009.
[24] A. I. Rikos, T. Charalambous, and C. N. Hadjicostis, "Distributed weight balancing over digraphs," IEEE Transactions on Control of Network Systems, vol. 1, no.
2, pp. 190–201, June 2014.