A distributed optimization-based approach for hierarchical model predictive control of large-scale systems with coupled dynamics and constraints
We present a hierarchical model predictive control approach for large-scale systems based on dual decomposition. The proposed scheme allows coupling in both dynamics and constraints between the subsystems and generates a primal feasible solution within a finite number of iterations, using primal averaging and a constraint tightening approach. The primal update is performed in a distributed way and does not require exact solutions, while the dual problem uses an approximate subgradient method. Stability of the scheme is established using bounded suboptimality.
💡 Research Summary
The paper proposes a hierarchical model predictive control (MPC) scheme tailored for large‑scale systems whose subsystems are coupled both dynamically and through constraints. The authors address the computational bottleneck of centralized MPC by decomposing the original optimization problem via dual decomposition, then solving the resulting dual problem with an approximate subgradient method while handling the primal variables through a distributed Jacobi algorithm.
Key to the approach is a constraint‑tightening step. For each sampling instant (t), a Slater vector (\bar u_t) that strictly satisfies the original constraints is identified, and a positive scalar (c_t), chosen smaller than the smallest slack with which (\bar u_t) satisfies the constraints, is added to each constraint, yielding a tightened set (g'(u,x_t)\le 0). This guarantees that any point violating the tightened constraints by less than (c_t) still satisfies the original constraints, while preserving the Slater condition needed for strong duality.
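As a small numerical illustration (our own toy numbers, not from the paper), the choice of the tightening margin can be sketched as:

```python
import numpy as np

def tightening_margin(g_at_slater):
    """Pick c_t from the constraint values g(u_bar, x_t) at a Slater point u_bar.

    All entries must be strictly negative; c_t must be positive and smaller
    than the smallest slack, so that g'(u) = g(u) + c_t preserves Slater's
    condition while leaving a safety margin for the original constraints."""
    slack = -np.max(g_at_slater)       # smallest slack over all constraints
    assert slack > 0, "u_bar must strictly satisfy every constraint"
    return 0.5 * slack                 # any value in (0, slack) would do

# hypothetical Slater point with constraint values g(u_bar) = [-0.4, -1.2]
c_t = tightening_margin(np.array([-0.4, -1.2]))
print(c_t)   # 0.2: any u with g(u) + c_t <= 0 also satisfies g(u) <= 0
```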
The dual problem is formed from the Lagrangian (L'(u,\mu)=f(u)+\mu^\top g'(u)) with non‑negative multipliers (\mu). Because the exact minimizer of the Lagrangian may be unavailable after a finite number of inner iterations, the authors introduce a tolerance (\delta) and work with a (\delta)-suboptimal primal point (\tilde u(\mu)). An approximate subgradient of the dual function is then simply the tightened-constraint residual (g'(\tilde u(\mu))), which can be computed without any differentiation of the dual function itself.
The algorithm is organized in two nested loops:
- Outer loop (approximate subgradient) – Starting from (\mu^{(0)}=0), the multiplier is updated as (\mu^{(k+1)} = \Pi_{\mathbb{R}_+}\big(\mu^{(k)} + \alpha_t d^{(k)}\big)), where (d^{(k)} = g'(\tilde u^{(k)})) and the step size is chosen as (\alpha_t = \Delta_t / L'^2_t). The outer loop runs for a predetermined number (\bar k_t) of iterations, which is computed from a bound on the constraint violation (Eq. 38).
- Inner loop (Jacobi distributed optimization) – For a given multiplier (\mu^{(k)}), each subsystem (i) solves a local convex problem (\min_{u_i\in\Omega_i} L'(u_1^{(p)},\dots,u_i,\dots,u_M^{(p)},\mu^{(k)})) while keeping the latest estimates of its neighbors' variables fixed. This is the classic Jacobi method applied to a strongly convex quadratic Lagrangian. Convergence is linear with rate (\phi\in(0,1)) provided the Hessian blocks satisfy (\lambda_{\min}(H_{ii}) > \sum_{j\neq i}\bar\sigma(H_{ij})) (Eq. 39), i.e., inter‑subsystem coupling is sufficiently weak. The inner loop is executed for (\bar p_k) iterations, enough to guarantee that the resulting primal point satisfies the suboptimality tolerance (\epsilon_t = \Delta_t/2).
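The weak-coupling condition (Eq. 39) can be checked directly on the Hessian blocks. A sketch with a hypothetical two-subsystem Hessian (the block layout and numbers are ours, purely for illustration):

```python
import numpy as np

def jacobi_weakly_coupled(H, blocks):
    """Check the block-diagonal-dominance condition sufficient for linear
    convergence of the Jacobi sweep:
        lambda_min(H_ii) > sum over j != i of sigma_max(H_ij)."""
    for i, (r0, r1) in enumerate(blocks):
        lam_min = np.linalg.eigvalsh(H[r0:r1, r0:r1]).min()
        coupling = sum(np.linalg.norm(H[r0:r1, c0:c1], 2)   # spectral norm
                       for j, (c0, c1) in enumerate(blocks) if j != i)
        if lam_min <= coupling:
            return False
    return True

# hypothetical Hessian: strong local curvature (3-4), weak coupling (0.5)
H = np.array([[4.0, 0.0, 0.5, 0.0],
              [0.0, 4.0, 0.0, 0.5],
              [0.5, 0.0, 3.0, 0.0],
              [0.0, 0.5, 0.0, 3.0]])
print(jacobi_weakly_coupled(H, [(0, 2), (2, 4)]))   # True
```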
After the outer loop terminates, the algorithm returns the primal average (\hat u(\bar k_t)=\frac{1}{\bar k_t}\sum_{l=0}^{\bar k_t-1} \tilde u^{(l)}) as the control input. The averaging step is crucial: although the individual primal iterates need not be feasible, the average's violation of the tightened constraints can be bounded and driven below the tightening margin (c_t) within (\bar k_t) iterations, so the returned input satisfies the original constraints.
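Putting the two loops and the averaging step together, a minimal sketch on a toy coupled quadratic (our own example; the problem data, step size, and iteration counts are hypothetical, not the paper's computed bounds):

```python
import numpy as np

# toy tightened problem with one coupling constraint:
#   min  u1^2 + u2^2 + 0.5*u1*u2   s.t.  g'(u) = 1 - u1 - u2 <= 0
# Lagrangian: L'(u, mu) = f(u) + mu * (1 - u1 - u2)

def inner_jacobi(mu, u, p_bar=10):
    """Jacobi inner loop: each 'subsystem' minimizes L' over its own variable
    while the neighbor's latest iterate is held fixed."""
    for _ in range(p_bar):
        u = np.array([(mu - 0.5 * u[1]) / 2.0,    # argmin over u1 of L'
                      (mu - 0.5 * u[0]) / 2.0])   # argmin over u2 of L'
    return u

k_bar, alpha = 20, 0.5                 # hypothetical outer iterations / step size
mu, u = 0.0, np.zeros(2)
iterates = []
for _ in range(k_bar):
    u = inner_jacobi(mu, u)            # approximately minimize the Lagrangian
    iterates.append(u)
    d = 1.0 - u.sum()                  # approximate subgradient = g'(u)
    mu = max(0.0, mu + alpha * d)      # projected dual ascent
u_hat = np.mean(iterates, axis=0)      # primal average returned as the input
print(u_hat, mu)   # mu is near its optimum 1.25; the average lags behind u* = (0.5, 0.5)
```

As (\bar k_t) grows, the residual violation (g'(\hat u)) of the averaged point shrinks; in the paper it is the tightening margin (c_t) that must dominate this residual so that the original constraints hold after finitely many iterations.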
Stability analysis hinges on two inequalities. First, the averaged primal cost is bounded above by the optimal value of the tightened problem plus terms that vanish as the number of outer iterations grows (Eq. 33). Second, by selecting (\alpha_t) and (\epsilon_t) such that (\alpha_t L'^2_t/2 + \epsilon_t \le \Delta_t) (Eq. 34), the actual MPC cost at time (t) is guaranteed to be strictly smaller than the cost at the previous step (Eq. 14). Hence the cost function itself serves as a Lyapunov function, establishing closed‑loop stability.
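With the choices reported above ((\alpha_t=\Delta_t/L'^2_t) and (\epsilon_t=\Delta_t/2)), the decrease condition of Eq. 34 holds with equality; a quick check with hypothetical numbers for (\Delta_t) and (L'_t):

```python
# hypothetical values for the required decrease and the subgradient bound
Delta_t, L_t = 0.1, 2.0
alpha_t = Delta_t / L_t**2          # step size, as in the outer loop above
eps_t = Delta_t / 2.0               # inner-loop suboptimality tolerance
lhs = alpha_t * L_t**2 / 2.0 + eps_t
print(lhs <= Delta_t)               # True: Eq. 34 is met (with equality)
```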
The paper also discusses practical aspects: the communication pattern is hierarchical—local controllers exchange only with their neighbors during the Jacobi updates, while a higher‑level coordinator distributes the global parameters (\alpha_t, \epsilon_t, \bar k_t, \bar p_k) and aggregates the dual variables. This architecture makes the method suitable for both hierarchical and fully distributed implementations.
Numerical illustrations (not reproduced here) demonstrate that with a modest number of outer ((\sim5)) and inner ((\sim10)) iterations, the scheme yields feasible, near‑optimal control actions for a benchmark large‑scale system, confirming the theoretical bounds.
In summary, the contribution lies in combining constraint tightening, primal averaging, and an approximate subgradient method to obtain a finite‑iteration, distributed MPC algorithm that guarantees feasibility, bounded suboptimality, and closed‑loop stability for systems with coupled dynamics and constraints. Future work suggested includes extending the approach to strongly coupled or nonlinear systems, incorporating communication delays, and exploring accelerated dual methods to further reduce the required number of iterations.