Multi-Agent Route Planning as a QUBO Problem
Multi-Agent Route Planning selects a subset of vehicles, each associated with a single predefined route, so as to maximize the spatial coverage of a road network while limiting redundant overlap. This paper gives a formal problem definition, proves NP-hardness by reduction from the Weighted Set Packing problem, and derives a Quadratic Unconstrained Binary Optimization (QUBO) formulation whose coefficients directly encode unique coverage rewards and pairwise overlap penalties. A single penalty parameter controls the coverage-overlap trade-off. We distinguish a soft regime, which supports multi-objective exploration, from a hard regime, in which the penalty is strong enough to enforce near-disjoint routes. We describe a practical pipeline for generating city instances, constructing candidate routes, building the QUBO matrix, and solving it with an exact mixed-integer solver (Gurobi), simulated annealing, and D-Wave hybrid quantum annealing. Experiments on Barcelona instances with up to 10 000 vehicles reveal a clear coverage-overlap knee and show that Pareto-optimal solutions are obtained mainly under the hard-penalty regime, while the D-Wave hybrid solver and Gurobi achieve essentially identical objective values with only minor runtime differences as problem size grows.
💡 Research Summary
The paper addresses the Multi‑Agent Route Planning (MaRP) problem, where a fleet of vehicles each follows a pre‑computed route and the goal is to maximize unique coverage of a road network while limiting redundant overlap. After formally defining the problem, the authors prove NP‑hardness by a polynomial‑time reduction from Weighted Set Packing, showing that any optimal MaRP solution must correspond to a disjoint set packing when the overlap penalty λ exceeds the sum of all coverage rewards.
They then translate the objective

max ∑ᵢ uᵢxᵢ − λ ∑ᵢ<ⱼ cᵢⱼxᵢxⱼ,  x ∈ {0, 1}ⁿ,

into a standard Quadratic Unconstrained Binary Optimization (QUBO) model. The QUBO matrix Q has diagonal entries Qᵢᵢ = −uᵢ (encouraging selection) and off-diagonal entries Qᵢⱼ = λcᵢⱼ (penalizing simultaneous selection of overlapping routes). Minimizing xᵀQx is equivalent to the original maximization.
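As a concrete illustration, here is a minimal sketch of how such a Q matrix can be assembled and checked against the objective. The three-route instance is our own toy example, not one from the paper:

```python
import numpy as np

def build_qubo(u, c, lam):
    """Build the QUBO matrix Q for the MaRP objective.

    u   : (n,) unique-coverage reward per route
    c   : (n, n) symmetric pairwise-overlap matrix (zero diagonal)
    lam : overlap penalty λ

    Minimizing x^T Q x over x in {0,1}^n is equivalent to maximizing
    sum_i u_i x_i  -  λ sum_{i<j} c_ij x_i x_j.
    """
    n = len(u)
    Q = np.zeros((n, n))
    Q[np.diag_indices(n)] = -u       # Q_ii = -u_i rewards selecting route i
    iu = np.triu_indices(n, k=1)
    Q[iu] = lam * c[iu]              # Q_ij = λ c_ij penalizes overlap once per pair
    return Q

def energy(Q, x):
    return float(x @ Q @ x)

# Toy instance: three routes, routes 0 and 1 overlap heavily.
u = np.array([5.0, 4.0, 3.0])
c = np.zeros((3, 3))
c[0, 1] = c[1, 0] = 6.0
Q = build_qubo(u, c, lam=1.0)

# Selecting all three routes pays the overlap penalty...
print(energy(Q, np.array([1, 1, 1])))   # -5 - 4 - 3 + 6 = -6
# ...while dropping route 1 avoids it.
print(energy(Q, np.array([1, 0, 1])))   # -5 - 3 = -8
```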
Two regimes for the penalty parameter λ are introduced. The “hard” regime, λhard = 1 + ∑uᵢ, makes any positive overlap cost more than the total coverage reward and therefore forces near‑disjoint solutions, while the “soft” regime computes λsoft adaptively from instance statistics (the medians of the per‑vehicle overlap sums and coverage rewards) to allow a balanced trade‑off.
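The hard rule is fully specified above; for the soft rule only the ingredients (medians of per-vehicle overlap sums and coverage rewards) are stated, so the ratio used below is our hypothetical reconstruction, not the paper's exact formula:

```python
import numpy as np

def lambda_hard(u):
    # λ_hard = 1 + Σuᵢ: any positive overlap then costs more than the
    # total coverage reward, so optima are (near-)disjoint packings.
    return 1.0 + float(np.sum(u))

def lambda_soft(u, c):
    # Hypothetical adaptive rule in the spirit of the description (the
    # paper's exact formula is not given here): balance the median
    # coverage reward against the median positive per-vehicle overlap sum.
    overlap_sums = np.asarray(c).sum(axis=1)
    positive = overlap_sums[overlap_sums > 0]
    return float(np.median(u) / np.median(positive))

u = np.array([5.0, 4.0, 3.0])
c = np.zeros((3, 3)); c[0, 1] = c[1, 0] = 6.0
print(lambda_hard(u))      # 1 + 12 = 13.0
print(lambda_soft(u, c))   # median(u)/median overlap = 4/6
```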
A full experimental pipeline is built using real‑world data from Barcelona. Road networks are extracted with OSMnx, vehicle origins/destinations are sampled, and routes are generated via the Valhalla routing engine. Instances range from 100 to 10 000 vehicles, producing QUBO matrices with up to millions of non‑zero entries but high sparsity.
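The pipeline's shape can be sketched end-to-end on a toy instance. Both substitutions below are ours: a grid graph stands in for the OSMnx-extracted Barcelona network, and networkx shortest paths stand in for routes from the Valhalla routing engine:

```python
import random
import networkx as nx

random.seed(0)
G = nx.grid_2d_graph(20, 20)     # toy stand-in for the real road network
nodes = list(G.nodes)

def sample_routes(G, nodes, n_vehicles):
    """Sample origin/destination pairs and route each vehicle between them."""
    routes = []
    for _ in range(n_vehicles):
        o, d = random.sample(nodes, 2)       # distinct origin/destination
        path = nx.shortest_path(G, o, d)     # stand-in for a Valhalla route
        # Represent the route as a set of undirected edges.
        routes.append({frozenset(e) for e in zip(path, path[1:])})
    return routes

routes = sample_routes(G, nodes, n_vehicles=50)

# Coverage reward u_i = number of edges covered; overlap c_ij = shared edges.
u = [len(r) for r in routes]
c = [[0 if i == j else len(ri & rj)
      for j, rj in enumerate(routes)]
     for i, ri in enumerate(routes)]
```

From `u` and `c`, the QUBO matrix follows as described above; since most route pairs share no edges, `c` is highly sparse, matching the sparsity the paper reports for its large instances.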
Three solvers are evaluated: (1) Gurobi, which solves the binary quadratic program exactly (with a 10‑minute wall‑clock limit); (2) simulated annealing via D‑Wave’s neal library; and (3) D‑Wave’s hybrid quantum annealer, which combines quantum sampling with classical post‑processing. Metrics include total unique coverage, total overlap, the overlap distribution, and structural properties of the induced sub‑graph.
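Gurobi and the hybrid annealer require commercial or cloud access, but the annealing baseline is easy to sketch. The single-spin-flip annealer below is our toy stand-in for `neal.SimulatedAnnealingSampler` (the real pipeline would call `sampler.sample_qubo(...)` instead); the 3×3 Q is the same kind of toy instance as above:

```python
import numpy as np

def simulated_annealing(Q, n_sweeps=2000, T0=5.0, T1=0.01, seed=0):
    """Minimal single-spin-flip annealer for min x^T Q x, x in {0,1}^n."""
    rng = np.random.default_rng(seed)
    n = Q.shape[0]
    x = rng.integers(0, 2, size=n)
    for T in np.geomspace(T0, T1, n_sweeps):   # geometric cooling schedule
        i = rng.integers(n)
        # Exact energy change of flipping bit i (Q need not be symmetric).
        delta = (1 - 2 * x[i]) * (Q[i, i] + Q[i] @ x + x @ Q[:, i]
                                  - 2 * Q[i, i] * x[i])
        # Accept downhill moves always, uphill moves with Metropolis probability.
        if delta <= 0 or rng.random() < np.exp(-delta / T):
            x[i] ^= 1
    return x

# Toy QUBO: diagonal rewards, one overlap penalty between routes 0 and 1.
Q = np.array([[-5.0, 6.0, 0.0],
              [ 0.0,-4.0, 0.0],
              [ 0.0, 0.0,-3.0]])
x = simulated_annealing(Q)
# The local minima are [0,1,1] (energy -7) and the optimum [1,0,1] (-8).
print(x, x @ Q @ x)
```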
Results reveal a clear “knee” in the coverage‑overlap curve as λ increases. In the hard‑penalty regime, solutions are essentially overlap‑free and lie on the Pareto frontier; both Gurobi and the D‑Wave hybrid achieve identical objective values, and their runtimes scale similarly with problem size. Simulated annealing yields slightly lower quality solutions and longer runtimes for the largest instances.
The study demonstrates that MaRP can be naturally expressed as a QUBO, that the penalty parameter effectively controls the multi‑objective trade‑off, and that hybrid quantum annealing offers competitive performance to state‑of‑the‑art classical solvers on realistic, large‑scale urban routing problems. This opens avenues for applying quantum‑inspired optimization to city logistics, delivery fleets, and traffic management.