A Solution to Fastest Distributed Consensus Problem for Generic Star & K-cored Star Networks


Distributed average consensus is the main mechanism in algorithms for decentralized computation. In the distributed average consensus algorithm, each node has an initial state, and the goal is for every node to compute the average of these initial states. To accomplish this, each node updates its state with a weighted average of its own state and its neighbors' states, using only local communication between neighboring nodes. In networks with fixed topology, the convergence rate of the distributed average consensus algorithm depends on the choice of weights. This paper studies the weight optimization problem in the distributed average consensus algorithm. The network topology considered here is a star network whose branches may have different lengths. Closed-form formulas for the optimal weights and the convergence rate of the algorithm are determined in terms of the network's topological parameters. Furthermore, the generic K-cored star topology is introduced as an alternative to the star topology; it achieves a faster convergence rate than the star topology. Simulations demonstrate the superior performance of the optimal weights compared with other common weighting methods.


💡 Research Summary

The paper addresses the weight‑optimization problem that underlies the convergence speed of distributed average consensus algorithms on fixed‑topology networks. In a consensus protocol each node holds an initial scalar value and repeatedly updates its state as a weighted average of its own value and those of its immediate neighbors. The convergence rate is dictated by the second‑largest eigenvalue (in magnitude) of the weight matrix, often denoted α; minimizing α yields the fastest possible consensus. While several heuristic weighting schemes (Metropolis, maximum‑degree, minimum‑degree, etc.) are widely used, they do not guarantee optimality for arbitrary topologies.
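The update rule and the role of α can be illustrated with a short simulation. The sketch below runs consensus with Metropolis weights on a small hypothetical path graph; the graph, initial states, and iteration count are illustrative choices, not taken from the paper:

```python
import numpy as np

# Hypothetical 4-node path graph 0-1-2-3 (illustrative, not from the paper).
edges = [(0, 1), (1, 2), (2, 3)]
n = 4
deg = np.zeros(n, dtype=int)
for i, j in edges:
    deg[i] += 1
    deg[j] += 1

# Metropolis rule: W_ij = 1 / (1 + max(d_i, d_j)) on each edge,
# with the diagonal chosen so that every row sums to one.
W = np.zeros((n, n))
for i, j in edges:
    W[i, j] = W[j, i] = 1.0 / (1 + max(deg[i], deg[j]))
np.fill_diagonal(W, 1.0 - W.sum(axis=1))

# alpha = second-largest eigenvalue magnitude of W; it governs the
# per-iteration contraction of the disagreement with the average.
eigs = np.sort(np.abs(np.linalg.eigvalsh(W)))[::-1]
alpha = eigs[1]

x = np.array([4.0, 0.0, 2.0, 6.0])  # initial states, average = 3
for _ in range(200):
    x = W @ x                        # each node averages with its neighbors
print(alpha, x)
```

Since α < 1 here, the error decays geometrically and every state approaches the average 3.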

The authors focus on two specific topologies: (1) a generic star network whose branches (or “spokes”) may have different lengths, and (2) a K‑cored star network, an extension where K central “core” nodes replace the single hub and each core is directly connected to every branch. The star topology is especially relevant for many sensor‑fusion, robotic‑swarm, and power‑grid applications where a central controller communicates with peripheral agents that may be arranged in linear chains of varying depth.

To formulate the problem, the authors define a weight matrix W that must be row‑stochastic (rows sum to one) and respect the adjacency constraints of the graph. The objective is to minimize the spectral radius of W−(1/N)11ᵀ, which is equivalent to minimizing the magnitude of the second eigenvalue α. By exploiting the symmetry within each branch (each branch is a path graph) and the permutation symmetry among branches of equal length, the authors dramatically reduce the dimensionality of the problem. Specifically, they assign a single intra‑branch weight w_i to all edges inside branch i, and a hub‑to‑branch weight v_i for the edge connecting the hub (or any core node) to the first node of branch i. This reduces the number of independent variables from O(N²) to O(M), where M is the number of branches.
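Under this symmetry reduction, the whole weight matrix is determined by the per-branch pairs (w_i, v_i). A minimal sketch of the construction follows; the node ordering and the sample weight values are illustrative assumptions (they are what the optimizer would choose, not fixed here):

```python
import numpy as np

def star_weight_matrix(lengths, v, w):
    """Weight matrix for a generic star: node 0 is the hub; branch i is a
    path of lengths[i] nodes hanging off the hub. v[i] weights the
    hub-to-branch edge, w[i] weights every edge inside branch i."""
    n = 1 + sum(lengths)
    W = np.zeros((n, n))
    node = 1
    for i, L in enumerate(lengths):
        # hub <-> first node of branch i
        W[0, node] = W[node, 0] = v[i]
        # edges along the path inside branch i, all sharing weight w[i]
        for k in range(L - 1):
            W[node + k, node + k + 1] = W[node + k + 1, node + k] = w[i]
        node += L
    np.fill_diagonal(W, 1.0 - W.sum(axis=1))  # enforce row-stochasticity
    return W

# Two branches of lengths 3 and 2; weight values chosen for illustration.
W = star_weight_matrix([3, 2], v=[0.2, 0.25], w=[0.3, 0.3])
```

Only the 2M numbers in `v` and `w` are free; the diagonal is pinned down by the row-sum constraint.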

The weight‑selection problem is then cast as a semidefinite program (SDP). The SDP constraints enforce stochasticity, non‑negativity, and the graph structure, while the objective minimizes α. By applying the Karush‑Kuhn‑Tucker (KKT) conditions and leveraging the block‑diagonal structure of the Laplacian induced by the symmetry reduction, the authors derive closed‑form expressions for the optimal weights. For a generic star with branch lengths ℓ_i (number of nodes in branch i) and total node count N = 1 + Σ_i ℓ_i, the optimal hub‑to‑branch weight is

 v_i* = 2 / (ℓ_i + 2) · 1/(N−1)

and the intra‑branch weight takes the same value for all edges within that branch. Since each row of W must sum to one, the hub's self‑weight is then 1 − Σ_i v_i*. The resulting optimal second eigenvalue is

 α* = 1 − 2 / (max_i (ℓ_i + 2))

showing that the longest branch dominates the convergence speed.
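Taking the stated expression at face value, the dominance of the longest branch is easy to see numerically. The helper `alpha_star` below is hypothetical; it simply evaluates the formula above for a given list of branch lengths:

```python
def alpha_star(lengths):
    """Optimal second eigenvalue for a generic star, per the stated
    closed form: alpha* = 1 - 2 / (max_i (l_i + 2))."""
    return 1.0 - 2.0 / (max(lengths) + 2)

print(alpha_star([2, 2, 2]))   # 0.5
print(alpha_star([2, 2, 10]))  # one long branch slows the whole network
```

Lengthening a single branch from 2 to 10 raises α* from 0.5 to 5/6 ≈ 0.833, even though the other branches are unchanged.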

For the K‑cored star, the authors introduce K identical core nodes that are fully connected to each other and to every branch. The symmetry now includes permutations among the K cores, further simplifying the SDP. The optimal core‑to‑branch weight generalizes to

 v_i* = 2 / (ℓ_i + 2K) · 1/(N−K)

and the intra‑core weights are uniform. As K increases, the denominator grows, driving α* down and thereby accelerating consensus.
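To see the effect of additional cores, one can build the K‑cored star as described (K fully interconnected cores, each linked to the head of every branch) and compute α numerically. The construction below is inferred from the summary's description, and the baseline Metropolis weighting is an illustrative choice, not the paper's optimal design:

```python
import numpy as np
from itertools import combinations

def k_cored_star_edges(lengths, K):
    """Edge list for a K-cored star: cores 0..K-1 are fully interconnected,
    every core links to the first node of each branch, and each branch is
    a path of the given length. Returns (edges, total node count)."""
    edges = list(combinations(range(K), 2))      # core-core edges
    node = K
    for L in lengths:
        for c in range(K):                       # every core to branch head
            edges.append((c, node))
        for k in range(L - 1):                   # path inside the branch
            edges.append((node + k, node + k + 1))
        node += L
    return edges, node

def metropolis_alpha(edges, n):
    """Second-largest eigenvalue magnitude under Metropolis weights."""
    deg = np.zeros(n, dtype=int)
    for i, j in edges:
        deg[i] += 1
        deg[j] += 1
    W = np.zeros((n, n))
    for i, j in edges:
        W[i, j] = W[j, i] = 1.0 / (1 + max(deg[i], deg[j]))
    np.fill_diagonal(W, 1.0 - W.sum(axis=1))
    return np.sort(np.abs(np.linalg.eigvalsh(W)))[::-1][1]

for K in (1, 2, 3):
    edges, n = k_cored_star_edges([3, 3, 2], K)
    print(K, round(metropolis_alpha(edges, n), 4))
```

K = 1 recovers the plain star; larger K adds core redundancy while keeping the branches unchanged.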

Simulation experiments validate the theoretical findings. Random initial states are generated, and the consensus process is run using (a) Metropolis weights, (b) maximum‑degree weights, (c) minimum‑degree weights, and (d) the derived optimal weights for both the plain star and the K‑cored star (with K = 2, 3). Performance metrics include the number of iterations required to achieve a predefined error tolerance and the decay curve of the consensus error. Results consistently show that the optimal weights achieve the fastest convergence; the K‑cored star with K = 3 reduces the required iterations by roughly 40 % compared with the standard star using Metropolis weights.
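A stripped-down version of such an experiment can be sketched as follows, here comparing only Metropolis and maximum-degree weights on a small star; the branch lengths, tolerance, and random seed are arbitrary choices rather than the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def star_edges(lengths):
    """Edges of a plain star with path branches; node 0 is the hub."""
    edges, node = [], 1
    for L in lengths:
        edges.append((0, node))
        edges += [(node + k, node + k + 1) for k in range(L - 1)]
        node += L
    return edges, node

def iterations_to_tol(W, x0, tol=1e-6):
    """Count iterations until every state is within tol of the average."""
    avg = x0.mean()
    x, it = x0.copy(), 0
    while np.abs(x - avg).max() > tol:
        x = W @ x
        it += 1
    return it

edges, n = star_edges([4, 3, 3])
deg = np.zeros(n, dtype=int)
for i, j in edges:
    deg[i] += 1
    deg[j] += 1

# Metropolis: 1/(1 + max(d_i, d_j)) per edge; max-degree: 1/d_max per edge.
W_metro = np.zeros((n, n))
W_maxdeg = np.zeros((n, n))
for i, j in edges:
    W_metro[i, j] = W_metro[j, i] = 1.0 / (1 + max(deg[i], deg[j]))
    W_maxdeg[i, j] = W_maxdeg[j, i] = 1.0 / deg.max()
for W in (W_metro, W_maxdeg):
    np.fill_diagonal(W, 1.0 - W.sum(axis=1))

x0 = rng.normal(size=n)
print("metropolis:", iterations_to_tol(W_metro, x0))
print("max-degree:", iterations_to_tol(W_maxdeg, x0))
```

Both matrices are symmetric and stochastic, so both converge to the exact average; the iteration counts make the speed difference between weighting rules concrete.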

In conclusion, the paper makes three principal contributions: (1) it provides the first closed‑form solution for the fastest distributed consensus on a star network with heterogeneous branch lengths; (2) it introduces the K‑cored star topology as a practical means to further improve convergence speed by alleviating the central bottleneck; and (3) it demonstrates through extensive simulations that the analytically derived weights outperform commonly used heuristics. The methodology—symmetry‑based reduction followed by SDP formulation and KKT analysis—offers a template for tackling optimal consensus weight design on other structured graphs. Future work suggested includes extending the analysis to time‑varying or random topologies, handling communication delays and packet loss, and exploring multi‑variable consensus problems where each node holds a vector of states.