The Complexity of Multi-Mean-Payoff and Multi-Energy Games

The Complexity of Multi-Mean-Payoff and Multi-Energy Games

In mean-payoff games, the objective of the protagonist is to ensure that the limit average of an infinite sequence of numeric weights is nonnegative. In energy games, the objective is to ensure that the running sum of weights is always nonnegative. Multi-mean-payoff and multi-energy games replace individual weights by tuples, and the limit average (resp. running sum) of each coordinate must be (resp. remain) nonnegative. These games have applications in the synthesis of resource-bounded processes with multiple resources. We prove the finite-memory determinacy of multi-energy games and show the inter-reducibility of multimean-payoff and multi-energy games for finite-memory strategies. We also improve the computational complexity for solving both classes of games with finite-memory strategies: while the previously best known upper bound was EXPSPACE, and no lower bound was known, we give an optimal coNP-complete bound. For memoryless strategies, we show that the problem of deciding the existence of a winning strategy for the protagonist is NP-complete. Finally we present the first solution of multi-meanpayoff games with infinite-memory strategies. We show that multi-mean-payoff games with mean-payoff-sup objectives can be decided in NP and coNP, whereas multi-mean-payoff games with mean-payoff-inf objectives are coNP-complete.


💡 Research Summary

The paper investigates two fundamental classes of infinite-duration games that model the management of multiple resources: multi‑mean‑payoff games, where the long‑run average of each component of a weight vector must be non‑negative, and multi‑energy games, where the cumulative sum of each component must never drop below zero. These models are central to the synthesis of resource‑bounded reactive systems. The authors make four major contributions.

First, they prove finite‑memory determinacy for multi‑energy games. By introducing the notion of an “energy mask” they show that a player can keep all dimensions of the energy vector within safe bounds using a strategy that requires only polynomial‑size memory, despite the exponential blow‑up that naïve constructions would suggest.

Second, they establish an inter‑reducibility between multi‑mean‑payoff and multi‑energy games when strategies are restricted to finite memory. A multi‑mean‑payoff‑inf objective can be transformed into an equivalent multi‑energy objective by scaling the weights and choosing a sufficiently large initial credit; conversely, a multi‑energy objective can be expressed as a multi‑mean‑payoff‑sup objective by interpreting the energy levels as averages over long runs. This reduction preserves the memory bound of strategies, allowing results for one class to be transferred to the other.

Third, leveraging the reduction, the authors dramatically improve the known computational complexity for solving both game classes with finite‑memory strategies. The previous upper bound was EXPSPACE, and no lower bound had been established. By constructing polynomial‑size certificates for both winning and losing positions, they show that the decision problem lies in coNP, and they prove coNP‑hardness via a reduction from the complement of SAT. Consequently, solving finite‑memory multi‑mean‑payoff or multi‑energy games is coNP‑complete, which is optimal.

Fourth, they analyze the impact of memory restrictions. For memoryless (positional) strategies, the existence problem becomes NP‑complete: a winning positional strategy can be guessed and verified in polynomial time, while NP‑hardness follows from a reduction of 3‑SAT to the existence of a positive cycle in each dimension.

Finally, the paper tackles the previously open case of infinite‑memory strategies. For multi‑mean‑payoff games with a mean‑payoff‑sup objective, the problem reduces to checking the existence of a cycle whose average weight vector is component‑wise non‑negative. This can be decided both in NP (guess a cycle) and in coNP (prove no such cycle exists), placing the problem in NP∩coNP. In contrast, the mean‑payoff‑inf objective requires that all cycles have non‑negative average, which is coNP‑complete. The authors provide explicit reductions and algorithmic sketches for both cases, delivering the first complete complexity classification for infinite‑memory multi‑mean‑payoff games.

Overall, the work unifies the complexity landscape of multi‑dimensional quantitative games, clarifies the role of memory, and opens avenues for future research on stochastic extensions, partial observation, and real‑time constraints.