Chance-constrained Model Predictive Control for Multi-Agent Systems
We consider stochastic model predictive control of a multi-agent system with constraints on the probabilities of inter-agent collisions. We first study a sample-based approximation of the collision probabilities and use it to formulate constraints for the stochastic control problem. This approximation converges as the number of samples goes to infinity; however, the complexity of the resulting control problem is so high that the approach proves unsuitable for control under real-time requirements. To alleviate the computational burden, we propose a second approach that uses probabilistic bounds to determine regions of increased probability of presence for each agent and formulates constraints for the control problem that guarantee these regions do not overlap. We prove that the resulting problem is conservative with respect to the original problem with probabilistic constraints, i.e., every control strategy that is feasible under the new constraints is automatically feasible for the original problem. Furthermore, we show in simulations of a UAV path-planning scenario that the proposed approach yields significantly better run-time performance than a controller based on the sample-based approximation, at the cost of only a small degree of sub-optimality resulting from its conservativeness.
💡 Research Summary
The paper addresses the problem of safely planning the trajectories of multiple agents—such as robots or unmanned aerial vehicles (UAVs)—under stochastic dynamics while guaranteeing that the probability of inter‑agent collisions stays below a user‑specified threshold. The authors formulate a stochastic model predictive control (MPC) problem in which each agent’s future state over a finite horizon is a random vector, the control inputs are deterministic, and chance constraints limit both the probability of leaving a feasible region and the probability of a pairwise collision.
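The setup described above can be sketched as follows: under linear dynamics with additive noise, the mean and covariance of each agent's predicted state over the horizon follow directly from the chosen (deterministic) inputs. The double-integrator dynamics, time step, and noise covariance below are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

# Assumed 2-D double-integrator x_{k+1} = A x_k + B u_k + w_k, with
# state x = (position, velocity) and additive noise of covariance W.
dt = 0.1
A = np.block([[np.eye(2), dt * np.eye(2)],
              [np.zeros((2, 2)), np.eye(2)]])
B = np.vstack([0.5 * dt**2 * np.eye(2), dt * np.eye(2)])
W = 0.01 * np.eye(4)  # process-noise covariance (assumed known)

def propagate(mu0, Sigma0, inputs):
    """Propagate the predicted state mean and covariance over the horizon.

    Deterministic inputs shift only the mean; the covariance recursion
    Sigma_{k+1} = A Sigma_k A^T + W is input-independent.
    """
    mus, Sigmas = [mu0], [Sigma0]
    for u in inputs:
        mus.append(A @ mus[-1] + B @ u)
        Sigmas.append(A @ Sigmas[-1] @ A.T + W)
    return mus, Sigmas

mu0 = np.zeros(4)
Sigma0 = 0.001 * np.eye(4)
mus, Sigmas = propagate(mu0, Sigma0, [np.array([1.0, 0.0])] * 10)
```

The growing covariances are what the chance constraints act on: the further into the horizon, the more uncertain the agent's position, and the harder it is to bound the collision probability.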
Two solution approaches are presented. The first is a sample‑based approximation of the collision probabilities. By drawing a large number of Monte‑Carlo samples from the agents’ state distributions, the empirical collision frequency can be estimated and inserted as a linear constraint in a mixed‑integer linear program (MILP). This method is theoretically exact in the limit of infinite samples and works for arbitrary (non‑Gaussian) noise, but the number of required samples grows quickly with the horizon length and the number of agents. Consequently, the resulting MILP contains a combinatorial explosion of binary variables and constraints, making real‑time execution infeasible.
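The sample-based idea can be illustrated with a minimal Monte-Carlo estimator of a pairwise collision probability. This sketch draws Gaussian position samples for concreteness, although the paper's method applies to arbitrary distributions; the distributions, safety distance, and sample count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def empirical_collision_prob(mu_a, Sigma_a, mu_b, Sigma_b,
                             d_safe, n_samples=10_000):
    """Estimate P(||p_a - p_b|| < d_safe) from samples of the two
    agents' position distributions (Gaussian here for illustration)."""
    pa = rng.multivariate_normal(mu_a, Sigma_a, n_samples)
    pb = rng.multivariate_normal(mu_b, Sigma_b, n_samples)
    dist = np.linalg.norm(pa - pb, axis=1)
    return float(np.mean(dist < d_safe))

# Two agents 1 m apart with small position uncertainty, 0.5 m safety distance:
p = empirical_collision_prob(np.array([0.0, 0.0]), 0.01 * np.eye(2),
                             np.array([1.0, 0.0]), 0.01 * np.eye(2),
                             d_safe=0.5)
```

In the MPC formulation, such indicator-based frequencies are encoded with one binary variable per sample, time step, and agent pair, which is exactly the source of the combinatorial blow-up noted above.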
To overcome this computational bottleneck, the second approach introduces “Regions of Increased Probability of Presence” (RIPP). For each agent, the mean and covariance of its predicted state are used to construct a conservative geometric region (typically an ellipsoid or sphere) that contains the agent with at least a prescribed probability (e.g., 95 %). This construction relies on classical probability inequalities such as Chebyshev’s bound and does not assume any particular distribution shape. Collision avoidance is then enforced by requiring that the RIPP of any two agents do not overlap. The non‑overlap condition can be expressed as a set of linear (or mixed‑integer linear) constraints, yielding a much smaller MILP. The authors prove that any control sequence satisfying the RIPP non‑overlap constraints also satisfies the original chance‑collision constraints; thus the RIPP formulation is a conservative but safe approximation.
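A minimal sketch of the RIPP construction for spherical regions, using the multivariate Chebyshev bound P(||x - mu|| >= r) <= trace(Sigma)/r^2: choosing r = sqrt(trace(Sigma)/delta) contains the agent with probability at least 1 - delta for any distribution with that covariance. The function names and the disjointness check are illustrative; in the paper the non-overlap condition is encoded as mixed-integer linear constraints rather than checked pointwise.

```python
import numpy as np

def ripp_radius(Sigma, delta):
    """Radius of a spherical region of increased probability of presence.

    By the multivariate Chebyshev inequality, a ball of this radius around
    the mean contains the agent with probability >= 1 - delta, regardless
    of the distribution's shape.
    """
    return np.sqrt(np.trace(Sigma) / delta)

def ripps_disjoint(mu_a, Sigma_a, mu_b, Sigma_b, delta):
    """Conservative collision-avoidance surrogate: the two agents' RIPPs
    must not overlap, i.e. the distance between predicted means must
    exceed the sum of the two radii."""
    r = ripp_radius(Sigma_a, delta) + ripp_radius(Sigma_b, delta)
    return bool(np.linalg.norm(np.asarray(mu_a) - np.asarray(mu_b)) > r)
```

Because the bound is distribution-free, the regions are larger than a Gaussian confidence ellipsoid would be; this is the price of the conservativeness that makes the safety guarantee hold for arbitrary noise.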
The paper includes a detailed description of the linear dynamics model, the sampling procedure, and the MILP formulation for both approaches. A theoretical complexity analysis shows that the RIPP‑based problem scales polynomially with the number of agents and horizon length, whereas the sample‑based formulation scales exponentially.
Experimental validation is performed on a UAV path‑planning scenario with three agents subject to non‑Gaussian wind turbulence. The sample‑based MPC, using several thousand samples, requires on average 1.8 seconds per planning step, far exceeding typical real‑time requirements (≤0.5 s). In contrast, the RIPP‑based MPC solves the same problem in about 0.12 seconds on a standard desktop computer, while keeping the empirical collision probability below the 5 % limit. The RIPP method incurs only a modest loss of optimality (approximately 2 % higher cost) compared with the sample‑based method, but delivers a dramatic improvement in computational speed.
The authors position their work as the first practical MPC framework that handles chance constraints on pairwise collisions for multi‑agent systems. They highlight that the RIPP approach does not require Gaussian assumptions, works with arbitrary disturbance distributions, and can be integrated into existing MILP solvers without extensive tuning. Future research directions suggested include adaptive RIPP sizing, distributed implementations, and extensions to nonlinear dynamics. Overall, the paper provides both a rigorous theoretical foundation and convincing empirical evidence that conservative probabilistic region‑based constraints enable safe, real‑time stochastic MPC for multi‑agent systems.