Fastest Distributed Consensus Problem on Fusion of Two Star Networks
Authors: Saber Jafarizadeh
Sharif University of Technology, Department of Electrical Engineering, Azadi Ave, Tehran, Iran
jafarizadeh@ee.sharif.edu, saber.jafarizadeh@gmail.com

Abstract

Finding optimal weights for the problem of Fastest Distributed Consensus on networks with different topologies has been an active area of research for a number of years. Here in this work we present an analytical solution for the problem of Fastest Distributed Consensus for a network formed from the fusion of two different symmetric star networks, or in other words a network consisting of two different symmetric star networks which share the same central node. The solution procedure consists of stratification of the associated connectivity graph of the network and Semidefinite Programming (SDP), in particular solving the slackness conditions, where the optimal weights are obtained by inductive comparison of the characteristic polynomials initiated by the slackness conditions. Some numerical simulations are carried out to investigate the trade-off between the parameters of the two fused star networks, namely the length and number of branches.

Keywords: Fastest distributed consensus, Weight optimization, Sensor networks, Second largest eigenvalue modulus, Semidefinite programming, Distributed detection

I. INTRODUCTION

Distributed computation in the context of computer science is a well studied field with an extensive body of literature (see, for example, [1] for early work); some of its applications include distributed agreement, synchronization problems [2], and load balancing in parallel computers [3, 4].
A problem that has received renewed interest recently is distributed consensus averaging algorithms in sensor networks, and one of the main research directions is the computation of the optimal weights that yield the fastest convergence rate to the asymptotic solution [5, 6, 7], known as the Fastest Distributed Consensus averaging Algorithm, which iteratively computes the global average of distributed data in a sensor network using only local communications. Moreover, algorithms for distributed consensus find applications in, e.g., multi-agent distributed coordination and flocking [8, 9, 10, 11], distributed data fusion in sensor networks [12, 13, 6], the fastest mixing Markov chain problem [14], clustering [15, 16], gossip algorithms [17, 18], and distributed estimation and detection for decentralized sensor networks [19, 20, 21, 22, 23]. Most of the methods proposed so far avoid the direct computation of optimal weights and deal with the Fastest Distributed Consensus problem by numerical convex optimization methods, and in general no closed-form solution for the Fastest Distributed Consensus problem has been offered so far, except in [1, 24, 25]: for the path network, the conjectured optimal weights [3] were proved in [1]; in [25], the author solved the Fastest Distributed Consensus problem analytically for the path network using semidefinite programming without any assumption or conjecture; and in [24], the author proposes an analytical solution for the Fastest Distributed Consensus problem over complete cored and symmetric star networks.
Here in this work, we aim to solve the Fastest Distributed Consensus problem for the fusion of two symmetric star networks, called the Two Fused Star (TFS) network, or in other words a network consisting of two different symmetric star networks which share the same central node, by means of stratification and Semidefinite Programming (SDP), in particular by solving the slackness conditions, where the optimal weights are obtained by inductive comparison of the characteristic polynomials initiated by the slackness conditions. The simulation results confirm that the distributed consensus algorithm with optimal weights converges substantially faster than with other simple weighting methods, namely the maximum degree, Metropolis and best constant weighting methods; moreover, we have investigated the tradeoff between the parameters of the network and the convergence rate by numerical results.

The organization of the paper is as follows. Section II is an overview of the material used in the development of the paper, including relevant concepts from the distributed consensus averaging algorithm, graph symmetry and semidefinite programming. Section III contains the proposed method and main results of the paper, namely the exact determination of optimal weights for the fastest distributed consensus algorithm via stratification and SDP in the TFS network. Section IV presents simulations demonstrating the improvement of the obtained optimal weights over other weighting methods and the tradeoff between the parameters of the network, and Section V concludes the paper.

II. PRELIMINARIES

This section introduces the notation used in the paper and reviews relevant concepts from the distributed consensus averaging algorithm, graph symmetry and semidefinite programming.

A. Distributed Consensus

We consider a network with associated graph $\mathcal{G} = (\mathcal{V}, \mathcal{E})$ consisting of a set of nodes $\mathcal{V}$ and a set of edges $\mathcal{E}$, where each edge $\{i, j\} \in \mathcal{E}$ is an unordered pair of distinct nodes.
Each node $i$ holds an initial scalar value $x_i(0)$, and $x(0)$ denotes the vector of initial node values on the network. Within the network two nodes can communicate with each other if and only if they are neighbors. The main purpose of distributed consensus averaging is to compute the average of the initial values, $\frac{1}{n}\sum_{i=1}^{n} x_i(0)$, via a distributed algorithm in which the nodes only communicate with their neighbors. In this work, we consider distributed linear iterations, which have the form

$x_i(t+1) = W_{ii} x_i(t) + \sum_{j \in \mathcal{N}_i} W_{ij} x_j(t), \qquad i = 1, \dots, n,$

where $t = 0, 1, 2, \dots$ is the discrete time index and $W_{ij}$ is the weight on $x_j$ at node $i$. The weight matrix $W$ has the same sparsity pattern as the adjacency matrix of the network's associated graph, i.e., $W_{ij} = 0$ if $\{i, j\} \notin \mathcal{E}$ and $i \neq j$. This iteration can be written in vector form as

$x(t+1) = W x(t). \quad (1)$

The linear iteration (1) implies that $x(t) = W^t x(0)$ for $t = 0, 1, 2, \dots$. We want to choose the weight matrix $W$ so that for any initial value $x(0)$, $x(t)$ converges to the average vector, i.e.,

$\lim_{t \to \infty} x(t) = \lim_{t \to \infty} W^t x(0) = \frac{\mathbf{1}\mathbf{1}^T}{n} x(0). \quad (2)$

(Here $\mathbf{1}$ denotes the column vector with all coefficients one.) This is equivalent to the matrix equation

$\lim_{t \to \infty} W^t = \frac{\mathbf{1}\mathbf{1}^T}{n}. \quad (3)$

Assuming (3) holds, the convergence factor can be defined as

$r(W) = \left\| W - \frac{\mathbf{1}\mathbf{1}^T}{n} \right\|_2,$

where $\|\cdot\|_2$ denotes the spectral norm, or maximum singular value. The FDC problem in terms of the convergence factor can be expressed as the following optimization problem:

$\min_W \; r(W) \quad \text{s.t.} \quad W \in \mathcal{S}, \quad (4)$

where $W$ is the optimization variable, $\mathcal{S}$ denotes the set of matrices with the sparsity pattern of the graph, and the network is the problem data. In [5] it has been shown that the necessary and sufficient conditions for the matrix equation (3) to hold are that one is a simple eigenvalue of $W$ associated with the eigenvector $\mathbf{1}$, and all other eigenvalues are strictly less than one in magnitude. Moreover, in [5] the FDC problem has been formulated as the following minimization problem:

$\min_W \; \max\{\lambda_2(W), -\lambda_n(W)\} \quad \text{s.t.} \quad W = W^T, \; W\mathbf{1} = \mathbf{1}, \; W \in \mathcal{S}, \quad (5)$

where $\lambda_1 \geq \lambda_2 \geq \dots \geq \lambda_n$ are the eigenvalues of $W$ arranged in decreasing order and $\max\{\lambda_2, -\lambda_n\}$ is the Second Largest Eigenvalue Modulus (SLEM) of $W$. The main problem can be formulated in semidefinite programming form as [5]:

$\min_{s, W} \; s \quad \text{s.t.} \quad -sI \preceq W - \frac{\mathbf{1}\mathbf{1}^T}{n} \preceq sI, \quad W = W^T, \quad W\mathbf{1} = \mathbf{1}, \quad W \in \mathcal{S}. \quad (6)$

We refer to problem (6) as the Fastest Distributed Consensus (FDC) averaging problem.

B. Symmetry of Graphs

An automorphism of a graph is a permutation $\sigma$ of $\mathcal{V}$ such that $\{\sigma(i), \sigma(j)\} \in \mathcal{E}$ if and only if $\{i, j\} \in \mathcal{E}$. The set of all such permutations, with composition as the group operation, is called the automorphism group of the graph. For a vertex $i \in \mathcal{V}$, the set of all images $\sigma(i)$, as $\sigma$ varies through a subgroup of the automorphism group, is called the orbit of $i$ under the action of that subgroup. The vertex set can be written as a disjoint union of distinct orbits. In [26], it has been shown that the weights on the edges within an orbit must be the same.

C. Semidefinite Programming (SDP)

SDP is a particular type of convex optimization problem [27]. An SDP problem requires minimizing a linear function subject to a linear matrix inequality constraint [28]:

$\min_x \; c^T x \quad \text{s.t.} \quad F(x) \succeq 0, \quad (7)$

where $c$ is a given vector, $x = (x_1, \dots, x_m)$, and $F(x) = F_0 + \sum_i x_i F_i$, for some fixed Hermitian matrices $F_i$. The inequality sign in $F(x) \succeq 0$ means that $F(x)$ is positive semidefinite. This problem is called the primal problem. Vectors $x$ whose components are the variables of the problem and which satisfy the constraint $F(x) \succeq 0$ are called primal feasible points, and if they satisfy $F(x) \succ 0$ they are called strictly feasible points. The minimal objective value is by convention denoted by $p^*$ and is called the primal optimal value.
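To make the preliminaries concrete, the linear iteration (1) and the convergence factor can be checked numerically. The sketch below is our own illustration, not taken from the paper: it uses a 5-node path graph with the constant edge weight 1/3, an arbitrary choice satisfying the convergence conditions for this graph.

```python
import numpy as np

# Graph Laplacian of a 5-node path (our arbitrary example graph).
n = 5
L = np.zeros((n, n))
for i in range(n - 1):
    L[i, i] += 1; L[i + 1, i + 1] += 1
    L[i, i + 1] -= 1; L[i + 1, i] -= 1

# Constant-weight matrix: symmetric and W @ 1 = 1 by construction.
W = np.eye(n) - L / 3.0

x = np.array([4.0, 0.0, 1.0, 7.0, 3.0])  # initial node values x(0)
avg = x.mean()
for _ in range(500):                     # the linear iteration (1)
    x = W @ x

# Convergence factor r(W) = || W - 11^T/n ||_2 from the text.
J = np.ones((n, n)) / n
conv_factor = np.linalg.norm(W - J, 2)
print(np.allclose(x, avg), conv_factor)
```

Since all eigenvalues of $W$ other than the simple eigenvalue one are below one in magnitude, the iterates approach the average of the initial values at the geometric rate given by the convergence factor.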
Due to the convexity of the set of feasible points, SDP has a nice duality structure, with the associated dual program being:

$\max_Z \; -\operatorname{tr}(F_0 Z) \quad \text{s.t.} \quad Z \succeq 0, \quad \operatorname{tr}(F_i Z) = c_i. \quad (8)$

Here the variable is the real symmetric (or Hermitian) positive semidefinite matrix $Z$, and the data $c$, $F_i$ are the same as in the primal problem. Correspondingly, a matrix $Z$ satisfying the constraints is called dual feasible (or strictly dual feasible if $Z \succ 0$). The maximal objective value of (8), i.e., the dual optimal value, is denoted by $d^*$. The objective value of a primal (dual) feasible point gives an upper (lower) bound on $d^*$ ($p^*$). The main reason why one is interested in the dual problem is that one can prove $d^* \leq p^*$, and under relatively mild assumptions we can have $p^* = d^*$. If the equality holds, one can prove the following optimality condition. A primal feasible $\hat{x}$ and a dual feasible $\hat{Z}$ are optimal if and only if

$F(\hat{x}) \hat{Z} = \hat{Z} F(\hat{x}) = 0. \quad (9)$

This latter condition is called the complementary slackness condition.

In one way or another, numerical methods for solving SDP problems always exploit the inequality $d \leq p$, where $d$ and $p$ are the objective values for any dual feasible point and primal feasible point, respectively. The difference $p - d \geq 0$ is called the duality gap. If the optimal duality gap is zero, then we say that strong duality holds.

III. TWO FUSED STAR (TFS) NETWORK

In this section we solve the Fastest Distributed Consensus (FDC) averaging algorithm for the Two Fused Star (TFS) network, consisting of two different symmetric star networks which share the same central node, by means of stratification and Semidefinite Programming (SDP).

A. Two Fused Star (TFS) Network

The TFS network consists of path-shaped branches, called tails, with two different lengths $m_1$ and $m_2$, where the numbers of tails are $n_1$ and $n_2$, respectively, and all tails are connected to one node called the central node (see Fig. 1). The connectivity graph of the TFS network has $n_1 m_1 + n_2 m_2 + 1$ nodes and $n_1 m_1 + n_2 m_2$ edges, where the set of nodes is denoted by $\mathcal{V}$.

Fig. 1. Strata of the weighted TFS network, with the edge weights $w_1, w_2, \dots$ and $w_{-1}, w_{-2}, \dots$ labeled along the tails of the two stars.
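The TFS topology just described can be generated programmatically. The following sketch is our own illustration (the parameter order for the paper's tuples is our guess): a TFS graph with $n_1$ tails of length $m_1$ and $n_2$ tails of length $m_2$ has $n_1 m_1 + n_2 m_2 + 1$ nodes and $n_1 m_1 + n_2 m_2$ edges.

```python
import numpy as np

def tfs_adjacency(n1, m1, n2, m2):
    """Adjacency matrix of a two-fused-star graph: n1 tails of length m1
    and n2 tails of length m2, all attached to the central node 0."""
    N = 1 + n1 * m1 + n2 * m2
    A = np.zeros((N, N), dtype=int)
    idx = 1
    for branches, length in ((n1, m1), (n2, m2)):
        for _ in range(branches):
            prev = 0                      # every tail starts at the center
            for _ in range(length):
                A[prev, idx] = A[idx, prev] = 1
                prev = idx
                idx += 1
    return A

A = tfs_adjacency(3, 4, 4, 3)             # one of the sizes used in Table 1
print(A.shape[0], A.sum() // 2)           # node count, edge count
```

For this example the graph has $3 \cdot 4 + 4 \cdot 3 + 1 = 25$ nodes and $24$ edges, matching the counts stated above.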
Any automorphism of the TFS graph is a permutation of its tails; hence, according to subsection II-B, the graph has $m_1 + m_2$ classes of edge orbits, and thus it suffices to consider just $m_1 + m_2$ weights (as labeled in Fig. 1): the edges at distance $j$ from the central node on the tails of the two stars carry the weights $w_j$ and $w_{-j}$, respectively, and the weight matrix $W$ is defined accordingly. Denoting the $\mu$-th vertex orbit (called the $\mu$-th stratum) under the tail permutations by $\Gamma_\mu$, the vertex set of the TFS graph can be written as the disjoint union of the strata $\Gamma_\mu$.

B. Stratification of the TFS Network

Using the stratification method introduced in [26, 29, 30, 31, 32, 33], the TFS graph can be stratified into a disjoint union of strata, as shown in Fig. 1. In each stratum (except for the stratum of the central node), the unitary DFT matrices map the orthonormal basis vectors of the stratum to a new set of orthonormal vectors. In this new basis the weight matrix has the matrix elements provided in Appendix A, and it takes a block diagonal form: one block containing the central stratum, together with repeated copies of two smaller blocks of sizes $m_1 \times m_1$ and $m_2 \times m_2$, all provided in Appendix A. The eigenvalues of $W$ can be obtained from the diagonalization of these blocks. Introducing the matrix defined in (10), which is a submatrix of the central block, we can use the Cauchy interlacing theorem.

Theorem 1 (Cauchy Interlacing Theorem) [34]: Let $A$ and $B$ be $n \times n$ and $m \times m$ matrices, where $m < n$. $B$ is called a compression of $A$ if there exists an orthogonal projection onto a subspace of dimension $m$ such that $B$ is the restriction of $A$ to that subspace. The Cauchy interlacing theorem states that if the eigenvalues of $A$ are $\alpha_1 \leq \dots \leq \alpha_n$ and those of $B$ are $\beta_1 \leq \dots \leq \beta_m$, then $\alpha_k \leq \beta_k \leq \alpha_{k+n-m}$ for all $k = 1, \dots, m$.

Noticing that the matrix in (10) is a compression of the central block, we can state the following corollary for the eigenvalues of the weight matrix. Note that in the degenerate case $n_1 = 1$ (or $n_2 = 1$) the corresponding repeated block does not appear, the matrix in (10) reduces accordingly, and the difference between the dimensions in Theorem 1 becomes more than one, so the interlacing argument must be adapted; it is clear that the same result still holds in this case.

Corollary 1: For the blocks given in (A-1) and the matrix defined in (10), Theorem 1 implies interlacing relations between their eigenvalues. It is obvious from this result that the eigenvalues relevant to the SLEM are amongst the eigenvalues of the central block.

C. Determination of Optimal Weights for the FDC Algorithm in the TFS Network via SDP

Based on Corollary 1 and subsection II-A, one can express the FDC problem for the TFS network in the form of the semidefinite program

$\min \; s \quad \text{s.t.} \quad sI - W' + v v^T \succeq 0, \quad sI + W' - v v^T \succeq 0, \quad (11)$

where $W'$ denotes the central block and $v$ is the column vector defined in (12), which is the eigenvector of $W'$ corresponding to the eigenvalue one. The slack matrices in (11) can each be written as a linear combination of rank one matrices, as in (13) and (14), where the vectors involved are provided in Appendix B. Using (13) and (14), the constraints in (11) can be written as (15-a) and (15-b). In order to formulate problem (11) in the form of the standard semidefinite program described in subsection II-C, we define $c$, $x$ and the matrices $F_i$ accordingly, and in the dual case we choose the dual variable $Z \succeq 0$ as a sum of rank one matrices built from two column vectors; obviously this choice of $Z$ implies that it is positive semidefinite.
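Theorem 1 is easy to verify numerically. The sketch below is our own illustration: it compresses a random symmetric matrix onto a lower-dimensional subspace and checks the interlacing inequalities $\alpha_k \leq \beta_k \leq \alpha_{k+n-m}$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 6, 4
M = rng.standard_normal((n, n))
A = (M + M.T) / 2                       # random symmetric matrix

# Orthonormal basis of a random m-dimensional subspace; B = Q^T A Q is
# the compression of A onto that subspace.
Q, _ = np.linalg.qr(rng.standard_normal((n, m)))
B = Q.T @ A @ Q

a = np.sort(np.linalg.eigvalsh(A))      # alpha_1 <= ... <= alpha_n
b = np.sort(np.linalg.eigvalsh(B))      # beta_1  <= ... <= beta_m
ok = all(a[k] <= b[k] <= a[k + n - m] for k in range(m))
print(ok)
```

The check succeeds for any choice of subspace, which is exactly the content of the theorem.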
From the complementary slackness condition (9) we have the equations (16) and (17-a)-(17-b), relating the primal slack matrices to the dual variable $Z$. Multiplying both sides of equation (17-a) by $v^T$ we obtain (18), and using the constraints we obtain the relations (19-a)-(19-c). To have strong duality we set the duality gap to zero, which gives (20). Considering the linear independence of the vectors introduced in Appendix B, we can expand the two dual vectors in terms of them as in (21-a) and (21-b), with the coordinates to be determined. Using (13) and (14) and the expansions (21), while considering (18), the slackness conditions (17) can be written as the scalar relations (22-a)-(22-c) and (23-a)-(23-c), where (22-a) and (23-a) hold for the interior indices of the tails. Considering (19), (22) and (23), we obtain (24), or equivalently (25); for the remaining indices we have (26-a) and (26-b), where the coefficient matrices are the Gram matrices provided in Appendix B. Substituting (26) in (23) we arrive at the recursions (27-a)-(27-g) and (28-a)-(28-g), where (27-b) and (28-b) hold for the interior stages. Comparing the first pair of these recursions yields two candidate values, $1/2$ and $0$, where the latter is not acceptable.
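The role of the complementary slackness condition (9) can be illustrated on the simplest possible SDP (our own toy example, not the paper's program (11)): for $\min_s s$ subject to $sI - A \succeq 0$, the primal optimum is $s^* = \lambda_{\max}(A)$, a dual optimal point is the rank one matrix $Z^* = v v^T$ built from a top unit eigenvector $v$, and the product of the primal slack matrix with $Z^*$ vanishes.

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))
A = (M + M.T) / 2                       # symmetric problem data

w, V = np.linalg.eigh(A)
s_star = w[-1]                          # primal optimal value: lambda_max(A)
v = V[:, -1]                            # corresponding unit eigenvector
Z = np.outer(v, v)                      # dual optimal: Z >= 0, tr(Z) = 1

S = s_star * np.eye(4) - A              # primal slack matrix, S >= 0
print(np.linalg.norm(S @ Z))            # complementary slackness: S Z = 0
```

Because $S v = s^* v - A v = 0$, the product $S Z$ is zero up to rounding, which is exactly the condition (9) that the paper's derivation exploits.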
Assuming $s = \cos\theta$ and substituting the value $1/2$ obtained above in (27-a) and (28-a), we can determine the SLEM, the optimal weights and the expansion coordinates in an inductive manner as follows. In the first stage, from comparing equations (27-a) and (28-a) and considering the relation (25), we conclude (29), which results in the first-stage weights (30-a) and (30-b), expressed in terms of sine functions of $\theta$. Continuing the above procedure inductively up to the $k$-th stage, and assuming the pattern established in the previous stages, we obtain equations (31-a) and (31-b) from the comparison of (27-b) and (28-b); considering relation (25) we conclude (32), and substituting back in (31) gives (33-a) and (33-b). Since equations (27-b) and (28-b) do not hold for the final stage, the results in (32) and (33) are true only up to that stage; in the final stage, comparing equations (27-c) and (28-c) and considering relation (25), we conclude (34), which gives (35-a) and (35-b). The same inductive procedure can be used to obtain the weights with positive indices, simply by using equations (27-b), (28-b), (27-f), (28-f), (27-g) and (28-g) and relation (25), which results in (36) and (37-a)-(37-d).
Using equations (33), (35) and (37) we can express the weights in terms of $\theta$, and substituting the results in equations (27-d), (27-e), (28-d) and (28-e) we obtain (38-a)-(38-d). From (38-c) and (38-d) we can conclude (39-a) and (39-b), and substituting (39) in (38-a) and (38-b) we obtain (40-a) and (40-b), whereby substituting $s = \cos\theta$ in (40) we conclude that $\theta$ has to satisfy the characteristic relation (41). In the case $n_1 = n_2$, $m_1 = m_2$ (symmetric star), equation (41) reduces to (42), which is in agreement with the results of [24].

One should also notice that the necessary and sufficient conditions for the convergence of the weight matrix are satisfied, since all roots of (41), which determine the eigenvalues of $W$, are strictly less than one in magnitude, and one is a simple eigenvalue of $W$ associated with the eigenvector $\mathbf{1}$. To support this fact we have computed numerically the roots of equation (41); considering the relation $s = \cos\theta$ and the fact that all roots of (41) are simple, we can conclude that all roots are strictly less than one in magnitude. The smallest and second largest roots are listed in Table 1 for different values of the network parameters.

Parameters            Second Largest Eigenvalue    Smallest Eigenvalue
(3,4,4,3)             0.9545                       -0.9445
(10,20,20,10)         0.997739                     -0.997739
(100,200,200,100)     0.9999772                    -0.9999772

Table 1. Second largest eigenvalue and smallest eigenvalue of the TFS network for different values of the network parameters.

As is obvious from the results in Table 1, the SLEM of the TFS network increases with the length of the branches, which is due to the topology of the TFS network, and in the case of optimal weights the second largest eigenvalue and the smallest eigenvalue have (nearly) the same absolute values.

IV. SIMULATION RESULTS

The aim of this section is to show the improvement of the optimal weights obtained in Section III over other weighting methods, namely the maximum degree, Metropolis and best constant weighting methods, by evaluating the SLEM numerically for the different weighting methods; moreover, we have investigated the tradeoff between the parameters of the network and the convergence rate by numerical results.
In Table 2, the SLEM of the TFS network for the optimal weights, maximum degree, Metropolis and best constant weighting methods has been depicted for different sizes of the TFS network.

Parameters        Optimal Weights    Max Degree    Metropolis    Best Constant
(3,4,4,3)         0.95450            0.98277       0.97194       0.97089
(3,4,3,6)         0.95381            0.98019       0.97195       0.96497
(10,20,20,10)     0.99774            0.99981       0.99962       0.99884

Table 2. SLEM of the TFS network for the optimal weights, maximum degree, Metropolis and best constant weighting methods.

Now we compare a TFS network with its equivalent symmetric star network. To do so, we define the total number of branches $n$ and the average length of branches $m$ of the equivalent symmetric star, in terms of the parameters of the TFS network, as

$n = n_1 + n_2, \qquad m = \frac{n_1 m_1 + n_2 m_2}{n_1 + n_2}, \quad (43)$

where the SLEM is obtained from the numerical solution of (41) and (42) for the TFS network and its equivalent symmetric star network, respectively. In Fig. 2, the SLEM of the TFS network and of its equivalent symmetric star network are depicted in terms of the average length of branches $m$.

Fig. 2. SLEM of the TFS network and its equivalent symmetric star network in terms of the average length of branches $m$.
As is clear from Fig. 2, for all values of $m$, the SLEM of the equivalent symmetric star network is smaller than the SLEM of the TFS networks with the same average length of branches, which in turn means that the equivalent symmetric star network converges faster than the TFS networks with the same average length of branches.

In Fig. 3, the SLEM of the TFS network is depicted in terms of the lengths of branches $m_1$ and $m_2$.

Fig. 3. SLEM of the TFS network in terms of the lengths of branches $m_1$ and $m_2$.

It is obvious from Fig. 3 that the SLEM increases faster with one branch length than with the other.

In Fig. 4, the weight $w_{-1}$ of the edges connecting the branches of the first star of the TFS network to the central node is depicted in terms of the lengths of branches $m_1$ and $m_2$.

Fig. 4. The weight $w_{-1}$ in terms of $m_1$ and $m_2$.

It is obvious from Fig. 4 that $w_{-1}$ increases with one branch length while it decreases with the other; it is also interesting that the critical line in the surface of Fig. 4 is the average length of branches defined in (43).

V. CONCLUSION

The Fastest Distributed Consensus averaging algorithm in sensor networks has received renewed interest recently, but most of the methods proposed so far avoid the direct computation of optimal weights and deal with the Fastest Distributed Consensus problem by numerical convex optimization methods. Here in this work, we have solved the Fastest Distributed Consensus problem for the TFS network by means of stratification and Semidefinite Programming (SDP). Our approach is based on fulfilling the slackness conditions, where the optimal weights are obtained by inductive comparison of the characteristic polynomials initiated by the slackness conditions.
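The suboptimal schemes compared in Table 2 can be reproduced for any graph. The sketch below is our own illustration (the 7-branch star test case is an arbitrary choice, not one of the paper's examples): it builds the max-degree, Metropolis and best-constant weight matrices and evaluates their SLEMs. By construction the best-constant SLEM can never exceed the max-degree SLEM, since both have the form $I - \alpha L$ and the best-constant $\alpha = 2/(\lambda_1(L) + \lambda_{n-1}(L))$ is chosen optimally.

```python
import numpy as np

# Arbitrary test graph: a star with 7 branches of length 1 around node 0.
edges = [(0, i) for i in range(1, 8)]
n = 8
deg = np.zeros(n); adj = np.zeros((n, n))
for i, j in edges:
    adj[i, j] = adj[j, i] = 1
    deg[i] += 1; deg[j] += 1
Lap = np.diag(deg) - adj                 # graph Laplacian

J = np.ones((n, n)) / n
slem = lambda W: np.abs(np.linalg.eigvalsh(W - J)).max()

lam = np.sort(np.linalg.eigvalsh(Lap))
W_md = np.eye(n) - Lap / deg.max()                  # max-degree weights
W_bc = np.eye(n) - 2.0 / (lam[1] + lam[-1]) * Lap   # best constant weights
W_mh = np.zeros((n, n))                             # Metropolis weights
for i, j in edges:
    W_mh[i, j] = W_mh[j, i] = 1.0 / (1 + max(deg[i], deg[j]))
W_mh += np.diag(1 - W_mh.sum(axis=1))               # diagonal fixes row sums

print(slem(W_md), slem(W_mh), slem(W_bc))
```

For this star graph the Laplacian spectrum is {0, 1 (six times), 8}, so the three SLEMs work out to 6/7, 7/8 and 7/9, respectively, illustrating that the ranking of the simple schemes can vary with the topology while the best-constant scheme always dominates max-degree.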
The simulation results confirm that the distributed consensus algorithm with optimal weights converges substantially faster than with other simple weighting methods, namely the maximum degree, Metropolis and best constant weighting methods; moreover, we have investigated the tradeoff between the parameters of the network and the convergence rate by numerical results. We believe that this method is powerful and lucid enough to be extended to other networks with more general topologies, namely star networks with more different types of branches, which is the object of future investigations.

APPENDIX A
ELEMENTS OF THE WEIGHT MATRIX IN THE BASIS DEFINED VIA STRATIFICATION

The nonzero matrix elements of the weight matrix in the new basis defined via stratification, together with the resulting blocks (A-1), are obtained by a direct change of basis from the weights $w_j$ and $w_{-j}$ labeled in Fig. 1.

APPENDIX B
DEFINITION OF THE VECTORS AND THEIR CORRESPONDING GRAM MATRICES

The rank one vectors appearing in the decompositions (13) and (14), together with their Gram matrices, are defined componentwise in the new basis; each vector is supported on at most two consecutive strata.
REFERENCES

[1] S. Boyd, P. Diaconis, J. Sun, and L. Xiao, "Fastest mixing Markov chain on a path," The American Mathematical Monthly, vol. 113, no. 1, pp. 70-74, Jan. 2006.
[2] D. Bertsekas and J. N. Tsitsiklis, "Parallel and Distributed Computation: Numerical Methods," Prentice Hall, 1989.
[3] J. Boillat, "Load balancing and Poisson equation in a graph," Concurrency and Computation: Practice and Experience, vol. 2, pp. 289-313, 1990.
[4] G. Cybenko, "Load balancing for distributed memory multiprocessors," Journal of Parallel and Distributed Computing, vol. 7, pp. 279-301, 1989.
[5] L. Xiao and S. Boyd, "Fast linear iterations for distributed averaging," Systems and Control Letters, vol. 53, pp. 65-78, Sept. 2004.
[6] L. Xiao, S. Boyd, and S. Lall, "A scheme for robust distributed sensor fusion based on average consensus," Int. Conf. on Information Processing in Sensor Networks, pp. 63-70, April 2005, Los Angeles.
[7] L. Xiao, S. Boyd, and S. Lall, "Distributed average consensus with time-varying metropolis weights," the 4th International Conference on Information Processing in Sensor Networks, Los Angeles, April 2005.
[8] A. Jadbabaie, J. Lin, and A. S. Morse, "Coordination of groups of mobile autonomous agents using nearest neighbor rules," IEEE Transactions on Automatic Control, vol. 48, no. 6, pp. 988-1001, June 2003.
[9] L. Moreau, "Stability of multiagent systems with time-dependent communication links," IEEE Transactions on Automatic Control, vol. 50, pp. 169-182, 2005.
[10] R. Olfati-Saber and R. M. Murray, "Consensus problems in networks of agents with switching topology and time-delays," IEEE Transactions on Automatic Control, vol. 49, no. 9, pp. 1520-1533, September 2004.
[11] V. D. Blondel, J. M. Hendrickx, A. Olshevsky, and J. N. Tsitsiklis, "Convergence in multiagent coordination, consensus and flocking," in Proc. IEEE Conf. Decision Contr., Eur. Contr. Conf., Dec. 2005, pp. 2996-3000.
[12] A. Olshevsky and J. Tsitsiklis, "Convergence rates in distributed consensus and averaging," in Proc. IEEE Conf. Decision Contr., San Diego, CA, Dec. 2006.
[13] S. Kar and J. M. F. Moura, "Distributed average consensus in sensor networks with random link failures," in IEEE Int. Conf. Acoust., Speech, Signal Process. (ICASSP), Apr. 2007.
[14] S. Boyd, P. Diaconis, and L. Xiao, "Fastest mixing Markov chain on a graph," SIAM Review, vol. 46, no. 4, pp. 667-689, December 2004.
[15] F. Bai and A. Jamalipour, "Performance evaluation of optimal sized cluster based wireless sensor networks with correlated data aggregation consideration," 33rd IEEE Conference on Local Computer Networks (LCN), 14-17 Oct. 2008, pp. 244-251.
[16] H. Nakayama, N. Ansari, A. Jamalipour, and N. Kato, "Fault-resilient sensing in wireless sensor networks," Computer Communications, vol. 30, pp. 2375-2384, Sept. 2004.
[17] S. Boyd, A. Ghosh, B. Prabhakar, and D. Shah, "Randomized gossip algorithms," IEEE Transactions on Information Theory, vol. 52, no. 6, pp. 2508-2530, June 2006.
[18] T. Aysal, M. E. Yildiz, A. D. Sarwate, and A. Scaglione, "Broadcast gossip algorithms for consensus," IEEE Trans. on Signal Processing, vol. 57, no. 7, pp. 2748-2761, July 2009.
[19] J. N. Tsitsiklis, "Decentralized detection," in Advances in Signal Processing, H. V. Poor and J. B. Thomas, Eds., vol. 2, pp. 297-344, JAI Press, 1993.
[20] V. Delouille, R. Neelamani, and R. Baraniuk, "Robust distributed estimation in sensor networks using the embedded polygons algorithm," in Proceedings of The Third International Conference on Information Processing in Sensor Networks, Berkeley, California, USA, April 2004, pp. 405-413.
[21] M. Alanyali, S. Venkatesh, O. Savas, and S. Aeron, "Distributed Bayesian hypothesis testing in sensor networks," in Proceedings of American Control Conference, Boston, Massachusetts, June 2004, pp. 5369-5374.
[22] Z. Q. Luo, "An isotropic universal decentralized estimation scheme for a bandwidth constrained ad hoc sensor network," IEEE J. Selected Areas Communication, vol. 23, no. 4, pp. 735-744, Apr. 2005.
[23] D. P. Spanos, R. Olfati-Saber, and R. M. Murray, "Distributed Kalman filtering in sensor networks with quantifiable performance," in Proceedings of The Fourth International Conference on Information Processing in Sensor Networks, Los Angeles, California, USA, April 2005.
[24] S. Jafarizadeh, "Exact determination of optimal weights for fastest distributed consensus," arXiv:1001.4278
[25] S. Jafarizadeh, "Exact Determination of Optimal Weights for Fastest Distributed Consensus Algorithm in Path Network via SDP," arXiv:1002.0722
[26] S. Boyd, P. Diaconis, P. Parrilo, and L. Xiao, "Fastest mixing Markov chain on graphs with symmetries," SIAM Journal of Optimization, vol. 20, no. 2, pp. 792-819, 2009.
[27] L. Vandenberghe and S. Boyd, "Semidefinite programming," SIAM Rev., vol. 38, pp. 49-95, 1996.
[28] S. Boyd and L. Vandenberghe, "Convex Optimization," Cambridge University Press, 2004.
[29] M. A. Jafarizadeh, R. Sufiani, and S. Jafarizadeh, "Calculating Effective Resistances on Underlying Networks of Association Scheme," Journal of Mathematical Physics, vol. 49, 073303, July 2008.
[30] M. A. Jafarizadeh, R. Sufiani, S. Salimi, and S. Jafarizadeh, "Investigation of continuous-time quantum walk by using Krylov subspace-Lanczos algorithm," The European Physical Journal B, vol. 59, no. 2, pp. 199-217, Sep. 2007.
[31] M. A. Jafarizadeh, R. Sufiani, and S. Jafarizadeh, "Calculating two-point Resistances in Distance-Regular Resistor Networks," Journal of Physics A: Mathematical and Theoretical, vol. 40, pp. 4949-4972, May 2007.
[32] M. A. Jafarizadeh, R. Sufiani, and S. Jafarizadeh, "Recursive Calculation of Effective Resistances in Distance-Regular Networks Based on Bose-Mesner Algebra and Christoffel-Darboux identity," Journal of Mathematical Physics, vol. 50, 023302, 2009.
[33] S. Jafarizadeh, R. Sufiani, and M. A. Jafarizadeh, "Evaluation of Effective Resistances in Pseudo-Distance-Regular Resistor Networks," Journal of Statistical Physics, published online 05 Jan. 2010, DOI 10.1007/s10955-009-9909-8.
[34] G. H. Golub and C. F. Van Loan, Matrix Computations, 2nd ed., Johns Hopkins University Press, Baltimore, 1989.