Robust Distributed Learning under Resource Constraints: Decentralized Quantile Estimation via (Asynchronous) ADMM

Notice: This research summary and analysis were automatically generated using AI technology. For absolute accuracy, please refer to the [Original Paper Viewer] below or the Original ArXiv Source.

Decentralized learning on resource-constrained edge devices requires algorithms that are communication-efficient, robust to data corruption, and lightweight in memory usage. While state-of-the-art gossip-based methods satisfy the first requirement, achieving robustness remains challenging. Asynchronous decentralized ADMM-based methods have been explored for estimating the median, a statistical centrality measure that is well known to be more robust than the mean. However, existing approaches require memory that scales with node degree, making them impractical when memory is limited. In this paper, we propose AsylADMM, a novel gossip algorithm for decentralized median and quantile estimation, primarily designed for asynchronous updates and requiring only two variables per node. We analyze a synchronous variant of AsylADMM to establish theoretical guarantees and empirically demonstrate fast convergence for the asynchronous algorithm. We then show that our algorithm enables quantile-based trimming, geometric median estimation, and depth-based trimming, with quantile-based trimming empirically outperforming existing rank-based methods. Finally, we provide a novel theoretical analysis of rank-based trimming via Markov chain theory.


💡 Research Summary

The paper addresses a pressing need in decentralized learning on resource‑constrained edge devices: algorithms must be communication‑efficient, robust to corrupted data, and memory‑light. While gossip‑based methods excel at low communication cost, they typically rely on the mean and thus lack robustness. Existing asynchronous decentralized ADMM approaches for median estimation mitigate robustness issues but require each node to store a number of auxiliary variables proportional to its degree, which is prohibitive on devices with limited RAM.

To overcome these limitations, the authors propose AsylADMM (Asynchronous Lightweight ADMM), a novel gossip algorithm that estimates medians and arbitrary quantiles using only two variables per node, regardless of degree. The key technical contribution is a reformulation of the augmented Lagrangian that aggregates edge-based dual variables into node-level averages \(\hat{z}_k\) and \(\hat{\mu}_k\). In each iteration a node \(k\) (i) computes the average \(\hat{x}_k\) of its neighbors' current estimates, (ii) forms a consensus value \(\hat{z}_k = (\hat{x}_k + x_k)/2\), (iii) updates its aggregated dual \(\hat{\mu}_k \leftarrow \hat{\mu}_k + \rho(\hat{z}_k - x_k)\), and (iv) performs a proximal step on its local quantile loss \(f_k\),
\[
x_k \leftarrow \operatorname*{arg\,min}_x \; f_k(x) + \frac{\rho}{2}\Bigl(x - \hat{z}_k - \frac{\hat{\mu}_k}{\rho}\Bigr)^2.
\]
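The four-step update above can be sketched in code. The following is an illustrative reconstruction, not the paper's reference implementation: the function names, the synchronous (all-nodes-per-round) scheduling, and the use of the pinball loss \(f_k(x) = \tau\,(x - a_k)_+ + (1-\tau)\,(a_k - x)_+\) for node data \(a_k\) (whose proximal step has the closed form below) are our assumptions.

```python
import numpy as np

def prox_pinball(v, a, tau, rho):
    """Closed-form solution of argmin_x f(x) + (rho/2)*(x - v)^2 for the
    pinball loss f(x) = tau*(x - a)_+ + (1 - tau)*(a - x)_+ (a soft shift
    toward the local data point a)."""
    if v - tau / rho > a:          # optimum lies to the right of a
        return v - tau / rho
    if v + (1.0 - tau) / rho < a:  # optimum lies to the left of a
        return v + (1.0 - tau) / rho
    return a                       # optimum pinned at the kink

def sync_quantile_admm(data, neighbors, tau=0.5, rho=1.0, iters=200):
    """Synchronous sketch of the two-variable update: each node k keeps
    only its estimate x_k and its aggregated dual mu_hat_k."""
    n = len(data)
    x = np.array(data, dtype=float)   # primal estimates x_k, seeded with data
    mu = np.zeros(n)                  # aggregated duals mu_hat_k
    for _ in range(iters):
        x_new, mu_new = np.empty(n), np.empty(n)
        for k in range(n):
            x_hat = np.mean([x[j] for j in neighbors[k]])  # (i) neighbor average
            z_hat = 0.5 * (x_hat + x[k])                   # (ii) consensus value
            mu_new[k] = mu[k] + rho * (z_hat - x[k])       # (iii) dual update
            v = z_hat + mu_new[k] / rho
            x_new[k] = prox_pinball(v, data[k], tau, rho)  # (iv) proximal step
        x, mu = x_new, mu_new
    return x
```

With \(\tau = 0.5\) the nodes' estimates drive toward the network-wide median of the local values; other choices of \(\tau\) target the corresponding quantile. The asynchronous algorithm in the paper instead wakes one node at a time, which this synchronous sketch does not model.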

