Privacy-Preserving Dynamic Average Consensus by Masking Reference Signals
In multi-agent systems, dynamic average consensus (DAC) is a decentralized estimation strategy in which a set of agents tracks the average of time-varying reference signals. Because DAC requires exchanging state information with neighbors, attackers may gain access to these states and infer private information. In this paper, we develop a privacy-preserving method that protects each agent’s reference signal from external eavesdroppers and honest-but-curious agents while achieving the same convergence accuracy and convergence rate as conventional DAC. Our approach masks the reference signals by having each agent draw a random real number for each neighbor, exchange that number over an encrypted channel at initialization, and compute a masking value to form a masked reference. The agents then run the conventional DAC algorithm using the masked references. Convergence and privacy analyses show that the proposed algorithm matches the convergence properties of conventional DAC while preserving the privacy of the reference signals. Numerical simulations validate the effectiveness of the proposed privacy-preserving DAC algorithm.
💡 Research Summary
The paper addresses a critical privacy vulnerability in dynamic average consensus (DAC) algorithms used in multi‑agent systems to track the average of time‑varying reference signals. Conventional DAC requires each agent to exchange its state estimate with its neighbors, which can be intercepted by external eavesdroppers or inferred by internal honest‑but‑curious agents, potentially revealing private information such as the agents’ reference trajectories. Existing privacy‑preserving approaches either inject differential‑privacy noise—thereby sacrificing exact convergence—or rely on heavyweight cryptographic schemes that impose substantial computational and communication overhead.
To overcome these limitations, the authors propose a lightweight masking technique that preserves the exact convergence properties of the original DAC while guaranteeing privacy against both external and internal adversaries. The method consists of two phases: (1) a one‑time mask generation phase, and (2) the standard DAC update phase applied to masked references.
During mask generation, each agent i draws an independent random real number η_ij for every neighbor j ∈ N_i. These numbers are exchanged over an encrypted channel, after which agent i computes its mask as
m_i = Σ_{j∈N_i} (η_ji – η_ij).
Because the sum of all masks equals zero (Σ_i m_i = 0), adding the mask to the local reference signal, x̃_i(t) = x_i(t) + m_i, does not alter the global average: (1/N) Σ_i x̃_i(t) = (1/N) Σ_i x_i(t). Moreover, the masks are constant in time, so dx̃_i(t)/dt = dx_i(t)/dt. Consequently, the DAC update rule applied to the masked references becomes identical to the original rule:
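A minimal NumPy sketch of the mask-generation phase (the ring topology, variable names, and Gaussian draws are illustrative assumptions; the method only requires independent random reals exchanged once over an encrypted channel):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative topology: a 10-node ring (the paper uses a random 10-node graph)
N = 10
A = np.zeros((N, N))
for i in range(N):
    A[i, (i + 1) % N] = A[(i + 1) % N, i] = 1.0

# Mask generation: eta[i, j] is agent i's random number for neighbor j,
# assumed to have been exchanged once over an encrypted channel
eta = rng.normal(size=(N, N)) * A      # zero for non-neighbor pairs
m = (eta.T - eta).sum(axis=1)          # m_i = sum_{j in N_i} (eta_ji - eta_ij)

x0 = rng.normal(size=N)                # snapshot of the true reference signals
x_masked = x0 + m                      # masked references

print(m.sum())                         # pairwise terms cancel: masks sum to zero
print(x_masked.mean() - x0.mean())     # the global average is unchanged
```

Each pair (η_ij, η_ji) appears once with each sign across the network, which is why the masks cancel in the global sum regardless of the graph.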
dẑ_i(t)/dt = dx_i(t)/dt − β Σ_{j∈N_i} a_ij (ẑ_i(t) − ẑ_j(t)),
where ẑ_i denotes the local estimate of the average. Since the underlying dynamics are unchanged, the Laplacian matrix L remains the same, and the convergence rate, governed by β·λ_2 (λ_2 being the second smallest eigenvalue of L), is identical to that of the unmasked algorithm. The steady‑state tracking error bound γ/(β·λ_2), derived from Lemma 1, can be made arbitrarily small by selecting a sufficiently large gain β, exactly as in the conventional DAC.
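The masked update can be simulated end to end. The sketch below uses forward-Euler integration on a 10-node ring with sinusoidal references; the topology, gain, and step size are illustrative choices for this sketch, not the paper's exact simulation setup:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative setup: 10-node ring with sinusoidal references
N = 10
A = np.zeros((N, N))
for i in range(N):
    A[i, (i + 1) % N] = A[(i + 1) % N, i] = 1.0
L = np.diag(A.sum(axis=1)) - A            # graph Laplacian

eta = rng.normal(size=(N, N)) * A
m = (eta.T - eta).sum(axis=1)             # zero-sum masks from the mask phase

phases = rng.uniform(0, 2 * np.pi, N)
x = lambda t: np.sin(t + phases)          # time-varying reference signals

beta, dt, T = 200.0, 1e-3, 10.0           # gain and Euler step (assumed values)
z = x(0.0) + m                            # initialize estimates at masked references
t = 0.0
while t < T:
    xdot = (x(t + dt) - x(t)) / dt        # reference rate (masks are constant in time)
    z = z + dt * (xdot - beta * (L @ z))  # masked DAC update, same rule as unmasked
    t += dt

err = np.max(np.abs(z - x(t).mean()))     # tracking error w.r.t. the true average
print(err)                                # small: masking does not degrade accuracy
```

Increasing β shrinks the steady-state tracking error, mirroring the β·λ_2 trade-off described above; the Euler step must stay small enough relative to β·λ_max(L) for the discretization to remain stable.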
Privacy analysis proceeds by constructing indistinguishable alternative signal‑mask pairs. For any zero‑sum vector s (Σ_i s_i = 0), define an alternative reference x′(t) = x(t) + s and a corresponding mask m′ = m – s. Both pairs generate the same masked signal x̃(t) = x(t) + m = x′(t) + m′, leading to identical transmitted messages and identical estimate trajectories. An external eavesdropper, whose information set consists solely of the adjacency matrix, gain β, and the observable estimates ẑ_i(t), cannot differentiate between the original and alternative pairs; thus the true reference trajectory is not uniquely recoverable.
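The indistinguishability argument can be checked numerically: for any zero-sum s, the alternative pair (x′, m′) = (x + s, m − s) produces bit-for-bit the same masked signal, so every message an eavesdropper observes is unchanged. A small sketch under the same illustrative ring setup:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative 10-node ring topology and masks (assumed setup)
N = 10
A = np.zeros((N, N))
for i in range(N):
    A[i, (i + 1) % N] = A[(i + 1) % N, i] = 1.0
eta = rng.normal(size=(N, N)) * A
m = (eta.T - eta).sum(axis=1)      # true zero-sum masks

x0 = rng.normal(size=N)            # true references at some instant

# Build an indistinguishable alternative pair from an arbitrary zero-sum s
s = rng.normal(size=N)
s -= s.mean()                      # enforce sum(s) = 0
x_alt = x0 + s                     # alternative references
m_alt = m - s                      # compensating alternative masks

gap = np.max(np.abs((x0 + m) - (x_alt + m_alt)))
print(gap)                         # both pairs yield the same masked signal
```

Since s ranges over an (N−1)-dimensional subspace, infinitely many reference trajectories are consistent with the eavesdropper's observations.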
For honest‑but‑curious agents, the analysis shows that each such agent knows only its own random numbers η_ij and the numbers η_ji received from its immediate neighbors. Because the mask of any other agent involves random numbers from at least one non‑colluding neighbor, the honest‑but‑curious set cannot solve for that agent’s mask without additional information. The condition that every target agent has at least one legitimate (non‑curious) neighbor therefore guarantees privacy against internal collusion.
The authors validate their theory through simulations on a randomly generated 10‑node undirected graph with sinusoidal reference signals. Results demonstrate that the masked DAC achieves the same exponential convergence rate and steady‑state error as the unmasked DAC, while the eavesdropper’s reconstruction attempts fail. The only additional overhead is the one‑time encrypted exchange of random numbers; after that, the algorithm incurs negligible computational cost.
In summary, the proposed mask‑based privacy‑preserving DAC offers three key advantages: (1) exact preservation of convergence speed and accuracy of the original DAC, (2) strong privacy guarantees against both external eavesdroppers and internal honest‑but‑curious agents, and (3) minimal extra computational and communication burden limited to a single initialization step. This makes the approach highly suitable for real‑time applications such as distributed load sharing, microgrid voltage regulation, and coordinated control of networked battery energy storage systems, where both performance and data confidentiality are paramount. Future work may explore dynamic re‑masking under topology changes and integration with lightweight authentication mechanisms to further strengthen security.