Distributed Algorithms for Consensus and Coordination in the Presence of Packet-Dropping Communication Links - Part I: Statistical Moments Analysis Approach
This two-part paper discusses robustification methodologies for linear-iterative distributed algorithms for consensus and coordination problems in multicomponent systems, in which unreliable communication links may drop packets. We consider a setup where communication links between components can be asymmetric (i.e., component j might be able to send information to component i, but not necessarily vice versa), so that the information exchange between components in the system is in general described by a directed graph that is assumed to be strongly connected. In the absence of communication link failures, each component i maintains two auxiliary variables and updates each of their values to be a linear combination of their corresponding previous values and the corresponding previous values of neighboring components (i.e., components that send information to node i). By appropriately initializing these two (decoupled) iterations, the system components can asymptotically calculate variables of interest in a distributed fashion; in particular, the average of the initial conditions can be calculated as a function that involves the ratio of these two auxiliary variables. The focus of this paper is to robustify this double-iteration algorithm against communication link failures. We achieve this by modifying the double-iteration algorithm (introducing some additional auxiliary variables) and prove that the modified double iteration converges almost surely to average consensus. In the first part of the paper, we study the first and second moments of the two iterations, use them to establish convergence, and illustrate the performance of the algorithm with several numerical examples. In the second part, in order to establish the convergence of the algorithm, we use coefficients of ergodicity commonly used in analyzing inhomogeneous Markov chains.
💡 Research Summary
The paper addresses the problem of achieving average consensus in multi‑agent systems whose communication links are directed, possibly asymmetric, and subject to random packet drops. In the ideal (loss‑free) case, a “double‑iteration” scheme—running two identical linear iterations in parallel, one initialized with the agents’ initial values and the other with a vector of ones—allows each node to compute the average of the initial values as the ratio of the two state variables. However, this approach relies on the transition matrix being column‑stochastic at every step, an assumption that fails when links drop packets.
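The loss-free double iteration can be sketched in a few lines. The following is a minimal illustration, not the paper's exact setup: a 3-node directed ring with self-loops and uniform weights (both the graph and the initial values are assumptions chosen for the example). Two identical linear iterations run under a column-stochastic matrix, and each node's ratio converges to the average.

```python
# Minimal sketch of the loss-free double iteration (ratio consensus) on a
# directed 3-node ring 0 -> 1 -> 2 -> 0 with self-loops; graph, weights,
# and initial values are illustrative assumptions, not the paper's setup.
n = 3
# Column j of P places weight 1/2 on node j itself and 1/2 on its
# out-neighbor (j + 1) mod n, so every column sums to 1 (column-stochastic).
P = [[0.5 if i == j or i == (j + 1) % n else 0.0 for j in range(n)]
     for i in range(n)]

def matvec(P, v):
    return [sum(P[i][j] * v[j] for j in range(n)) for i in range(n)]

x0 = [4.0, 1.0, 7.0]             # initial values; their average is 4.0
y, z = x0[:], [1.0] * n          # iteration 1: the values; iteration 2: all ones
for _ in range(100):
    y, z = matvec(P, y), matvec(P, z)

ratios = [yi / zi for yi, zi in zip(y, z)]   # each entry -> 4.0
```

Because every column of P sums to 1, both iterations preserve their totals (12 and 3 here), and strong connectivity drives each ratio y_i / z_i to 12/3 = 4, the true average. A single dropped packet would break the column sums, which is exactly the failure mode the robustified algorithm addresses.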
To overcome this limitation, the authors propose a robustified double‑iteration algorithm. Each node i maintains three sets of auxiliary variables: (1) its internal state, (2) the cumulative “broadcast mass” it has transmitted up to the current time, and (3) for every in‑neighbor ℓ, the cumulative “received mass” from ℓ. The broadcast mass is the sum of past internal states weighted by the inverse of i’s out‑degree; the received mass from ℓ is updated only when a packet from ℓ successfully arrives—otherwise it stays unchanged. This design ensures that dropped packets never destroy mass: undelivered mass remains accounted for at the sender until a later packet gets through, so the network’s total mass balance is preserved.
At each discrete time step, every node broadcasts the same message to all its out‑neighbors (no per‑neighbor tailoring). Receiving nodes identify the sender from the packet header and update their received‑mass registers accordingly. The internal state update for each iteration is a linear combination of (i) the node’s own previous state weighted by the inverse out‑degree and (ii) the differences between the most recent received masses from each in‑neighbor, also weighted by the inverse out‑degree. Two such iterations run concurrently: the first with the original initial values, the second with all‑ones. The ratio of the two state vectors converges almost surely to the true average of the initial conditions.
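The mechanics above can be simulated on a small example. The sketch below uses a 3-node directed ring with i.i.d. Bernoulli packet drops; the graph, drop probability, step count, and variable names (sigma for cumulative broadcast mass, rho for cumulative received mass) are illustrative assumptions rather than the paper's notation. Both iterations experience the same drop realization, since in the protocol both ride in the same broadcast packet.

```python
# Sketch of the robustified double iteration under random packet drops,
# assuming a 3-node directed ring and i.i.d. Bernoulli link failures.
import random

random.seed(7)
edges = [(0, 1), (1, 2), (2, 0)]           # directed ring; each node has out-degree 1
n, steps, drop_prob = 3, 1000, 0.2

# One shared drop realization: both iterations travel in the same packets.
up = [{e: random.random() >= drop_prob for e in edges} for _ in range(steps)]

def run(init):
    d = 2                                  # out-degree + self, same for every node here
    y = init[:]
    sigma = [0.0] * n                      # cumulative broadcast mass of each node
    rho = {(i, l): 0.0 for (l, i) in edges}  # cumulative received mass at i from l
    for k in range(steps):
        sigma = [sigma[j] + y[j] / d for j in range(n)]
        y = [y[j] / d for j in range(n)]   # each node keeps its own share
        for (l, i) in edges:
            if up[k][(l, i)]:              # packet from l reaches i: absorb the backlog
                y[i] += sigma[l] - rho[(i, l)]
                rho[(i, l)] = sigma[l]
            # on a drop, rho stays unchanged and the mass remains "in flight"
    return y, sigma, rho

x0 = [4.0, 1.0, 7.0]                       # true average is 4.0
y, sig_y, rho_y = run(x0)
z, sig_z, rho_z = run([1.0] * n)
ratios = [yi / zi for yi, zi in zip(y, z)]

# Invariant: state mass plus in-flight mass equals the initial total.
in_flight = sum(sig_y[l] - rho_y[(i, l)] for (l, i) in edges)
```

At every step the total of the y states plus the undelivered (in-flight) mass equals the initial total exactly, and the per-node ratios settle at the true average despite 20% of packets being lost.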
The convergence proof proceeds in two stages. First, the authors analyze the first and second statistical moments of the stochastic process formed by the two iterations. By taking expectations, the mean dynamics are shown to evolve under a fixed column‑stochastic matrix (the expectation of the random transition matrices), guaranteeing preservation of the global average under the strong‑connectivity assumption. Second, the covariance (second‑moment) analysis demonstrates that the random packet‑drop process introduces a contraction factor at each step, driving the variance of the state differences to zero. Consequently, all nodes’ state ratios converge to the same limit with probability one. This moment‑based approach differs from prior work that relies on weak ergodicity or coefficients of ergodicity of products of time‑varying stochastic matrices.
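The first-moment claim can be checked numerically on a toy instance. Below, for the same illustrative 3-node ring (an assumption, not the paper's example), the per-step dynamics are written as a random matrix acting on an augmented state: the three node states plus one "in-flight" mass variable per link. Every realization of this matrix is column-stochastic, so its expectation, which governs the mean dynamics, is column-stochastic as well.

```python
# Numerical check that the expected per-step transition matrix on the
# augmented state is column-stochastic, for an assumed 3-node directed ring.
from itertools import product

# Augmented state: [y0, y1, y2, nu_01, nu_12, nu_20], where nu_li is mass
# sent by node l but not yet received by node i (in flight on link l -> i).
edges = [(0, 1), (1, 2), (2, 0)]
q = 0.3                                    # illustrative packet-drop probability

def step_matrix(delivered):
    """6x6 transition matrix for one realization of the link indicators."""
    M = [[0.0] * 6 for _ in range(6)]
    for i in range(3):
        M[i][i] = 0.5                      # node keeps y_i / (out-degree + 1)
    for e, (l, i) in enumerate(edges):
        nu = 3 + e
        if delivered[e]:                   # backlog and new share reach node i
            M[i][nu] += 1.0
            M[i][l] += 0.5
        else:                              # the mass accumulates in flight instead
            M[nu][nu] += 1.0
            M[nu][l] += 0.5
    return M

# Every realization conserves mass (all columns sum to 1), hence so does
# the expected matrix E[M] that drives the mean dynamics.
EM = [[0.0] * 6 for _ in range(6)]
for delivered in product([0, 1], repeat=3):
    M = step_matrix(delivered)
    assert all(abs(sum(M[r][c] for r in range(6)) - 1.0) < 1e-12
               for c in range(6))
    prob = 1.0
    for bit in delivered:
        prob *= (1 - q) if bit else q
    for r in range(6):
        for c in range(6):
            EM[r][c] += prob * M[r][c]

col_sums = [sum(EM[r][c] for r in range(6)) for c in range(6)]
```

Column-stochasticity of the expected matrix is exactly what makes the global average invariant in expectation; the second-moment contraction argument then rules out persistent fluctuations around that mean.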
The algorithm does not require acknowledgments, retransmissions, or knowledge of the in‑neighbors’ identities beyond the out‑degree, making it suitable for bandwidth‑constrained wireless sensor networks, robot swarms, or distributed energy resources. It also tolerates self‑packet drops, allowing for intermittent node‑processing faults.
Simulation experiments on various directed strongly‑connected graphs (ring, complete, random) and packet‑drop probabilities ranging from 0 % to 30 % confirm the theoretical findings: the average error decays rapidly, and the final consensus value matches the true average within numerical precision.
In summary, the paper introduces a practical, broadcast‑based consensus protocol that remains functional under asymmetric, unreliable links, and provides a rigorous moment‑analysis proof of almost‑sure convergence to average consensus. The second part of the work will extend the analysis using coefficients of ergodicity to handle more general time‑varying transition matrices.