A decentralised forward-backward-type algorithm with network-independent heterogeneous agent step sizes
Consider the problem of finding a zero of a finite sum of maximally monotone operators, where some operators are Lipschitz continuous and the rest are potentially set-valued. We propose a forward-backward-type algorithm for this problem suitable for decentralised implementation. In each iteration, agents evaluate a Lipschitz continuous operator and the resolvent of a potentially set-valued operator, and then communicate with neighbouring agents. Agents choose their step sizes independently using only local information, and the step size upper bound has no dependence on the communication graph. We demonstrate the potential advantages of the proposed algorithm with numerical results for min-max problems and aggregative games.
💡 Research Summary
This paper addresses the decentralized optimization problem of finding a zero of the sum of finitely many maximally monotone operators, where some operators are Lipschitz continuous and others are potentially set-valued. This formulation captures a broad class of problems, including distributed min-max optimization and aggregative games, where data or computation is partitioned across a network of agents.
The authors begin by critically reviewing existing decentralized forward-backward algorithms such as PG-EXTRA, NIDS, and the Malitsky-Tam algorithm. They identify three key limitations prevalent in these methods when applied to the general monotone inclusion problem:
- the single-valued operators must be co-coercive, a strong condition not satisfied by, e.g., saddle-point operators;
- all agents must use identical (homogeneous) step-sizes, preventing them from adapting to local function characteristics;
- the step-size upper bounds depend on global spectral properties of the communication graph, requiring non-local knowledge.
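The first limitation can be made concrete with a minimal numeric check (the operator chosen here is our own illustration, not taken from the paper): the saddle-point operator of f(x, y) = xy is monotone and 1-Lipschitz, yet fails co-coercivity because it is skew-symmetric.

```python
import numpy as np

# Saddle-point operator of f(x, y) = x*y:  B(x, y) = (df/dx, -df/dy) = (y, -x).
# Co-coercivity with some beta > 0 would require
#     <B(u) - B(v), u - v>  >=  beta * ||B(u) - B(v)||^2   for all u, v,
# but for this skew-symmetric operator the left-hand side is identically zero,
# while the right-hand side is positive whenever u != v.

def B(z):
    x, y = z
    return np.array([y, -x])

rng = np.random.default_rng(0)
for _ in range(5):
    u, v = rng.standard_normal(2), rng.standard_normal(2)
    lhs = np.dot(B(u) - B(v), u - v)         # always 0 by skew-symmetry
    rhs = np.linalg.norm(B(u) - B(v)) ** 2   # strictly positive here
    assert abs(lhs) < 1e-12 and rhs > 0

print("monotone and Lipschitz, but not co-coercive for any beta > 0")
```

This is exactly the saddle-point structure for which a Lipschitz-continuity assumption suffices but a co-coercivity assumption fails.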
To overcome all three limitations simultaneously, the paper proposes a novel forward-backward-type algorithm (Algorithm 1). In each iteration, agent i performs a local computation involving an extragradient-like evaluation of its Lipschitz operator B_i and the resolvent of its set-valued operator A_i, followed by communication with its immediate neighbors in the network graph. The core innovation lies in the algorithm’s structure: agents choose their step-sizes α_i independently based solely on their local Lipschitz constant L_i (with the bound α_i < 1/(8L_i)), and this bound is completely independent of the network’s connectivity or mixing matrix. This is achieved by evaluating the forward operator B_i at an auxiliary variable y^k rather than the primary variable x^k, a subtle but crucial modification derived from the “backward-forward-reflected-backward” algorithmic framework.
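To make the iteration structure above concrete, the following is a deliberately simplified toy sketch, not the paper's Algorithm 1 (whose exact update is not reproduced in this summary): we set A_i = 0 so the resolvent is the identity, take affine operators B_i(x) = x - c_i, and use a naive forward-then-mix update that, unlike the paper's method, retains a small constant-step-size bias. The mixing matrix W, the data c_i, and all names are our assumptions. What the sketch does show faithfully is the local step-size rule α_i < 1/(8 L_i), which references no network quantity.

```python
import numpy as np

# Toy sketch for a 3-agent ring (illustrative assumptions throughout).
# Agents cooperatively seek the minimizer of sum_i ||x - c_i||^2 / 2,
# whose consensus solution is mean(c) = 5.0.
c = np.array([1.0, 4.0, 10.0])   # hypothetical local data held by each agent
L = np.ones(3)                   # each B_i(x) = x - c_i is 1-Lipschitz
alpha = 0.9 / (8 * L)            # local rule alpha_i < 1/(8 L_i); could differ per agent

# Doubly stochastic mixing matrix for the ring graph. Note that it is used
# only for communication; the step-size bound above never referenced it.
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])

x = np.zeros(3)                  # primary variables, one per agent
y = np.zeros(3)                  # auxiliary variables, one per agent
for k in range(2000):
    # Forward step evaluated at the auxiliary variable y (not x);
    # with A_i = 0 the backward (resolvent) step is the identity.
    x = y - alpha * (y - c)
    # Communicate with neighbours: one mixing step with W.
    y = W @ x

# Entries land near the consensus value 5.0; this naive sketch keeps an
# O(alpha) bias, whereas the paper's algorithm converges to an exact solution.
print(np.round(x, 2))
```

The point of the sketch is the information flow: each agent's update uses only its own data, its own Lipschitz constant, and messages from immediate neighbours.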
The convergence analysis rigorously proves that the sequence generated by the algorithm converges to a solution of the original problem. The proof technique involves reformulating the decentralized problem as a monotone inclusion on a suitably defined product Hilbert space and showing that the proposed iteration is an instance of a known convergent method in that space.
Numerical experiments validate the theoretical claims and demonstrate the algorithm’s advantages. In distributed min-max problems, the proposed method converges reliably even when the saddle-point operators are not co-coercive, a scenario where PG-EXTRA fails. Simulations on aggregative games show that allowing heterogeneous step-sizes can lead to faster convergence, especially when agents have widely varying local Lipschitz constants. The paper further illustrates practical utility through an application to the coordination of a virtual power plant and discusses a memory-efficient implementation specifically for aggregative games.
In summary, this work provides a significant advancement in decentralized optimization by offering a method that is more general (requiring only Lipschitz continuity), more flexible (allowing agent-specific step-sizes), and more practical (requiring no global network knowledge for step-size tuning) than prior state-of-the-art algorithms for solving monotone inclusions over networks.