Distributed Optimization via Adaptive Regularization for Large Problems with Separable Constraints
Many practical applications require solving optimization problems over large, high-dimensional data sets, which makes these problems hard to solve and prohibitively time consuming. In this paper, we propose a parallel distributed algorithm that uses an adaptive regularizer (PDAR) to solve a joint optimization problem with separable constraints. The regularizer adapts to both the step size between iterations and the iteration number. We prove convergence of our algorithm to an optimal solution, and we use a multi-agent three-bin resource allocation example to illustrate its effectiveness. Numerical simulations show that our algorithm converges to the same optimal solution as other distributed methods in significantly less computational time.
💡 Research Summary
The paper addresses the growing need for efficient optimization methods that can handle massive, high‑dimensional data sets common in modern engineering and data‑science applications. Traditional centralized solvers quickly become infeasible when the problem size grows, prompting the development of distributed algorithms. However, most existing distributed schemes—such as ADMM, distributed sub‑gradient, and related consensus‑based methods—rely on fixed penalty or multiplier parameters. These static parameters are highly sensitive to initialization, can cause large oscillations in early iterations, and often impede fine‑tuning near the optimum, especially for very large‑scale problems.
To overcome these limitations, the authors propose a novel Parallel Distributed Adaptive Regularization (PDAR) algorithm. The key innovation is an adaptive regularizer that varies with both the step size between successive iterates and the iteration count: for each agent \(i\) at iteration \(k\), a regularization coefficient \(\lambda_i^{k}\) is computed from these two quantities and weights a proximal term in that agent's local update.
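The idea of a per-agent update with an iteration- and step-size-dependent regularization weight can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the exact form of \(\lambda_i^{k}\) is not reproduced here, so the `adaptive_lambda` formula below is an assumed placeholder that merely demonstrates the dependence on the inter-iterate step size and the iteration count, and the toy objective is invented for the example.

```python
import numpy as np

def adaptive_lambda(x_new, x_old, k, lam0=1.0):
    """Assumed (hypothetical) regularization coefficient: depends on the
    step size between successive iterates and decays with the iteration
    count, so the proximal pull fades as the iterates settle."""
    step = np.linalg.norm(x_new - x_old)
    return lam0 * (1.0 + step) / (1.0 + k)

def agent_update(grad_f, x, x_prev, k, lr=0.1):
    """One local gradient step on the agent's objective plus an
    adaptively weighted proximal pull toward the previous iterate."""
    lam = adaptive_lambda(x, x_prev, k)
    return x - lr * (grad_f(x) + lam * (x - x_prev))

# Toy usage: a single agent minimizes a local quadratic f(x) = (x - 3)^2.
grad = lambda x: 2.0 * (x - 3.0)
x_prev, x = np.array([0.0]), np.array([0.5])
for k in range(200):
    x, x_prev = agent_update(grad, x, x_prev, k), x
# x is now close to the minimizer 3.0
```

In a multi-agent setting each agent would run this update in parallel on its own block of variables, with the separable constraints handled locally; the adaptive weight damps early oscillations and then shrinks, which is the qualitative behavior the summary attributes to PDAR.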