Differentially Private Perturbed Push-Sum Protocol and Its Application in Non-Convex Optimization

Notice: This research summary and analysis were automatically generated using AI technology. For absolute accuracy, please refer to the [Original Paper Viewer] below or the Original ArXiv Source.

In decentralized networks, nodes cannot ensure that their shared information will be securely preserved by their neighbors, making privacy vulnerable to inference by curious nodes. Adding calibrated random noise before communication to satisfy differential privacy offers a proven defense; however, most existing methods are tailored to specific downstream tasks and lack a general, protocol-level privacy-preserving solution. To bridge this gap, we propose Differentially Private Perturbed Push-Sum (DPPS), a lightweight differential privacy protocol for decentralized communication. Since protocol-level differential privacy introduces the unique challenge of obtaining the sensitivity for each communication round, DPPS introduces a novel sensitivity estimation mechanism that requires each node to compute and broadcast only one scalar per round, enabling rigorous differential privacy guarantees. This design allows DPPS to serve as a plug-and-play, low-cost privacy-preserving solution for downstream applications built on it. To provide a concrete instantiation of DPPS and better balance the privacy-utility trade-off, we design PartPSP, a privacy-preserving decentralized algorithm for non-convex optimization that integrates a partial communication mechanism. By partitioning model parameters into local and shared components and applying DPPS only to the shared parameters, PartPSP reduces the dimensionality of consensus data, thereby lowering the magnitude of injected noise and improving optimization performance. We theoretically prove that PartPSP converges under non-convex objectives and, with partial communication, achieves better optimization performance under the same privacy budget. Experimental results validate the effectiveness of DPPS's privacy preservation and demonstrate that PartPSP outperforms existing privacy-preserving decentralized optimization algorithms.


💡 Research Summary

The paper addresses the pressing privacy challenge in decentralized learning systems where nodes exchange model updates over potentially untrusted links. Existing privacy‑preserving methods are typically tied to specific optimization algorithms, limiting their reuse across different tasks. To overcome this, the authors introduce a protocol‑level solution called Differentially Private Perturbed Push‑Sum (DPPS), which embeds differential privacy directly into the communication backbone of the Perturbed Push‑Sum (PPS) consensus protocol.

DPPS works by adding Laplace noise to the messages exchanged in each PPS round. The magnitude of the noise must be calibrated to the L1-sensitivity of the mapping from a node's query (its set of neighbors) to the vector of shared parameters it sends. Computing this sensitivity is non-trivial in a decentralized setting because it depends on the worst-case difference between a node's outgoing vector and those of all other nodes. The authors solve this by letting each node compute a local sensitivity scalar $S_i$ based on its own information and broadcast only this scalar each round. The protocol then adopts the maximum of all $S_i$ as the global sensitivity for that round. This lightweight "max-scalar" scheme requires only O(1) extra communication per round, yet it provably yields the correct sensitivity for the Laplace mechanism, guaranteeing $\varepsilon$-differential privacy for the entire network without any cryptographic overhead.
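The round structure described above can be sketched as follows. This is a simplified illustration, not the paper's exact protocol: the local sensitivity estimator here is a stand-in (the L1 norm of each node's outgoing vector), and the function names are hypothetical.

```python
import numpy as np

def dpps_round(local_vectors, epsilon, rng=None):
    """One simplified DPPS communication round (illustrative sketch).

    local_vectors: list of per-node outgoing shared-parameter vectors.
    epsilon: per-round privacy budget for the Laplace mechanism.
    """
    rng = rng or np.random.default_rng(0)
    # Step 1: each node i computes a local sensitivity scalar S_i.
    # The L1 norm is a placeholder; the paper's estimator is more refined.
    local_S = [np.abs(v).sum() for v in local_vectors]
    # Step 2: the scalars are broadcast, and the round-wise maximum is
    # adopted as the global sensitivity (the "max-scalar" scheme).
    global_S = max(local_S)
    # Step 3: Laplace noise with scale global_S / epsilon is added to
    # every outgoing vector before transmission.
    scale = global_S / epsilon
    return [v + rng.laplace(0.0, scale, size=v.shape) for v in local_vectors]
```

Note that only one scalar per node crosses the network in Step 2, which is the O(1) extra communication cost mentioned above.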

Having established a generic privacy-preserving communication primitive, the paper demonstrates its utility through a concrete downstream application: decentralized non-convex optimization. The authors propose PartPSP (Partial Communication Push-Sum SGD with Differential Privacy), which partitions each node's model parameters into (i) shared parameters $s$ that are communicated and (ii) local parameters $l$ that remain private to the node. Only the shared part is fed through DPPS, dramatically reducing the dimensionality of the data that must be noised. Because the sensitivity scales with the dimension of the shared vector, this partial-communication design leads to a smaller Laplace scale, lower utility loss, and faster convergence.
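The shared/local split can be sketched as a pair of index-based helpers. This is a minimal illustration under the assumption that parameters are a flat vector and the shared coordinates are fixed by an index set; the function and variable names are hypothetical, not the paper's notation.

```python
import numpy as np

def partition_params(theta, shared_idx):
    """Split a flat parameter vector into shared and local parts (sketch).

    shared_idx: indices of coordinates communicated via DPPS; the
    remaining coordinates never leave the node.
    """
    mask = np.zeros(theta.shape[0], dtype=bool)
    mask[shared_idx] = True
    return theta[mask], theta[~mask]

def merge_params(shared, local, shared_idx, dim):
    """Reassemble the full vector after the shared part returns from
    the (noised) consensus step."""
    mask = np.zeros(dim, dtype=bool)
    mask[shared_idx] = True
    theta = np.empty(dim)
    theta[mask] = shared
    theta[~mask] = local
    return theta
```

In a PartPSP-style loop, only the first return value of `partition_params` would be passed through the DPPS round; the local part bypasses communication entirely.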

Theoretical contributions include:

  1. A rigorous proof that the max-scalar sensitivity estimator yields a valid upper bound on the true L1-sensitivity, thus ensuring $\varepsilon$-DP for DPPS.
  2. Convergence analysis of PartPSP under standard smoothness and bounded-gradient assumptions for non-convex objectives. The authors show that, with appropriately clipped stochastic gradients, the expected optimality gap decays at the rate $O(1/\sqrt{T})$, matching the best known rates for private decentralized SGD.
  3. A comparison showing that, for a fixed privacy budget, PartPSP achieves a strictly better bound on the noise-induced error than full-communication private SGD, because the sensitivity term is reduced by the factor equal to the ratio of shared-to-total dimensions.
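The dimension-ratio argument in point 3 can be made concrete with a small arithmetic sketch. It assumes, consistent with the summary's claim, that the L1-sensitivity grows linearly with the communicated dimension; the per-coordinate bound used here is an illustrative number, not a value from the paper.

```python
def laplace_scale(dim, per_coord_bound, epsilon):
    """Laplace scale when L1-sensitivity is dim * per_coord_bound
    (an illustrative linear-in-dimension assumption)."""
    return dim * per_coord_bound / epsilon

# Full communication: all 1000 coordinates are shared.
full = laplace_scale(dim=1000, per_coord_bound=0.01, epsilon=1.0)
# Partial communication: only 250 coordinates are shared.
part = laplace_scale(dim=250, per_coord_bound=0.01, epsilon=1.0)
# Sharing a quarter of the coordinates cuts the noise scale by the
# same factor of 4 under the same privacy budget.
```

Under this assumption, the noise-induced error bound shrinks in direct proportion to the shared-to-total dimension ratio, which is the mechanism behind PartPSP's improved privacy-utility trade-off.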

Empirically, the authors evaluate DPPS and PartPSP on deep learning benchmarks such as CIFAR-10 and FEMNIST, using directed, time-varying, and sparsely connected graphs. Experiments confirm: (a) the scalar sensitivity estimates are accurate and incur negligible overhead; (b) the sensitivity grows with network diameter and shared-parameter dimension, matching theoretical predictions; (c) PartPSP consistently outperforms existing privacy-preserving decentralized algorithms (including those based on full-parameter Laplace noise and homomorphic encryption) by 5–12% in test accuracy under the same $\varepsilon$ values (1, 2, 5). Moreover, the training loss curves remain stable, and the partial-communication scheme prevents the divergence often observed in fully private decentralized SGD.

The paper’s contributions are threefold: (i) a plug‑and‑play DP‑enabled communication protocol (DPPS) with a novel, low‑cost sensitivity estimation; (ii) a concrete algorithm (PartPSP) that leverages DPPS to achieve superior privacy‑utility trade‑offs in non‑convex decentralized learning; (iii) comprehensive theoretical and experimental validation of both privacy guarantees and convergence behavior.

Limitations are acknowledged. The use of Laplace noise (L1‑based) may still be suboptimal for very high‑dimensional models, where Gaussian mechanisms or advanced composition could yield tighter privacy accounting. The max‑scalar estimator can be dominated by outlier nodes with unusually large local variations, inflating the noise for the whole network. Finally, the analysis assumes synchronous updates, doubly‑stochastic weight matrices, and strong connectivity, which may not hold in highly asynchronous or delay‑prone real‑world networks.

In summary, the work introduces a novel protocol‑level differential privacy primitive for decentralized consensus and demonstrates its practical impact through a well‑designed non‑convex optimization algorithm. It opens avenues for future research on adaptive sensitivity estimation, extensions to asynchronous settings, and integration with other privacy mechanisms to further broaden the applicability of privacy‑preserving decentralized learning.

