Primal-dual algorithm for distributed optimization: A dissipativity-based perspective
We study a continuous-time primal-dual algorithm for distributed optimization with nonconvex local cost functions over weight-unbalanced digraphs, and analyze its performance from a dissipativity-based perspective. We first reformulate the algorithm as a Lur'e-type system, consisting of a linear subsystem that depends on the communication topology and the algorithm gains, and a static nonlinear gradient feedback. We then show that the linear subsystem is dissipative with respect to a suitable supply rate, while the nonlinear feedback is not passive. Finally, we establish that, by properly selecting the gains or appropriately designing the communication network, the algorithm converges to an equilibrium at an exponential rate and thus achieves an optimal solution to the distributed problem. This work provides new insights into the roles of the network topology, the algorithm gains, and the cost functions in the performance of a distributed algorithm, and complements existing results from a different viewpoint.
💡 Research Summary
The paper investigates a continuous‑time primal‑dual algorithm for solving a distributed optimization problem in which each agent possesses a possibly non‑convex local cost function, while the sum of all local costs is assumed to be μ‑strongly convex. The agents communicate over a directed, weight‑unbalanced graph that is only required to be strongly connected.
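This assumption is weaker than requiring each local cost to be convex: individual agents may hold nonconvex functions as long as the nonconvexities cancel in the sum. A minimal numerical sketch (the specific costs below are hypothetical, not taken from the paper) illustrates two nonconvex local costs whose sum is strongly convex:

```python
import numpy as np

# Hypothetical local costs (not from the paper): each f_i is nonconvex,
# yet f_1(x) + f_2(x) = 2x^2, which is 4-strongly convex.
f1 = lambda x: x**2 + 3.0 * np.sin(x)
f2 = lambda x: x**2 - 3.0 * np.sin(x)

d2f1 = lambda x: 2.0 - 3.0 * np.sin(x)   # f1'' < 0 wherever sin(x) > 2/3
d2f2 = lambda x: 2.0 + 3.0 * np.sin(x)   # f2'' < 0 wherever sin(x) < -2/3
d2sum = lambda x: d2f1(x) + d2f2(x)      # identically 4, so mu = 4

xs = np.linspace(-10, 10, 2001)
print(d2f1(xs).min() < 0)   # True: f1 is nonconvex
print(d2f2(xs).min() < 0)   # True: f2 is nonconvex
print(d2sum(xs).min())      # 4.0: the sum is 4-strongly convex
```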
The authors first rewrite the algorithm as a Lur’e system, i.e., an interconnection of a linear subsystem and a static nonlinear feedback. The linear part captures the network topology through the graph Laplacian L_G and the algorithm gains (α, β, γ). The nonlinear part is the gradient difference Δ(y) = ∇̄f(y + y*) − ∇̄f(y*), where ∇̄f_i(x_i) = (1/r_i)∇f_i(x_i) and r is the left eigenvector of L_G associated with the zero eigenvalue (normalized so that 1ᵀr = 1).
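The rescaling by the left eigenvector r is what compensates for weight imbalance: on a balanced graph r = (1/n)1 and ∇̄f reduces to a uniform scaling of ∇f. The following sketch computes r for a small hypothetical unbalanced digraph and forms the scaled gradient (the graph, weights, and placeholder gradient are illustrative, not from the paper):

```python
import numpy as np

# Hypothetical 3-node strongly connected, weight-unbalanced digraph
# (cycle 1 -> 2 -> 3 -> 1 with unequal edge weights); A[i, j] is the
# weight agent i places on information received from agent j.
A = np.array([[0., 0., 2.],
              [1., 0., 0.],
              [0., 3., 0.]])
L = np.diag(A.sum(axis=1)) - A          # graph Laplacian, L @ 1 = 0

# Left eigenvector of L for the zero eigenvalue, normalized so 1^T r = 1.
w, V = np.linalg.eig(L.T)
r = np.real(V[:, np.argmin(np.abs(w))])
r = r / r.sum()
print(np.allclose(r @ L, 0))            # True: r is a left null vector
print(r)                                # approx [0.2727, 0.5455, 0.1818]

# Scaled local gradients used in the nonlinear feedback:
# grad_bar_f_i(x_i) = (1/r_i) * grad_f_i(x_i).
grad_f = lambda x: 2.0 * x              # placeholder local gradients
grad_bar_f = lambda x: grad_f(x) / r
```

Since the graph is unbalanced, r is not uniform, so each agent's gradient is rescaled by a different factor.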
A storage function V=½ ηᵀPη (η=
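To give a feel for the class of dynamics being analyzed, here is a hedged sketch of a *standard* continuous-time primal-dual flow for distributed optimization, simulated by forward Euler on an undirected graph. This is a common form in the literature, not necessarily the paper's exact dynamics, which run on weight-unbalanced digraphs and use the rescaled gradient ∇̄f in place of ∇f:

```python
import numpy as np

# Standard primal-dual flow (illustrative; may differ from the paper's):
#   x' = -alpha * grad_f(x) - beta * L @ x - gamma * L @ v
#   v' =  gamma * L @ x
L = np.array([[ 2., -1., -1.],
              [-1.,  2., -1.],
              [-1., -1.,  2.]])          # Laplacian of the complete 3-node graph
c = np.array([0., 3., 6.])
grad_f = lambda x: 2.0 * (x - c)         # local costs f_i(x) = (x - c_i)^2

alpha, beta, gamma, dt = 1.0, 1.0, 1.0, 0.01
x, v = np.zeros(3), np.zeros(3)
for _ in range(2000):                    # forward-Euler integration
    x, v = (x + dt * (-alpha * grad_f(x) - beta * L @ x - gamma * L @ v),
            v + dt * (gamma * L @ x))

print(x)  # all agents approach the global minimizer mean(c) = 3
```

At equilibrium, v' = 0 forces consensus (L @ x = 0), and summing the x-dynamics over agents forces the total gradient to vanish, so the consensus value minimizes the sum of the local costs.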