Energy Parity Games

Energy parity games are infinite two-player turn-based games played on weighted graphs. The objective of the game combines a (qualitative) parity condition with the (quantitative) requirement that the sum of the weights (i.e., the level of energy in the game) must remain positive. Besides their intrinsic interest in the design and synthesis of resource-constrained omega-regular specifications, energy parity games provide one of the simplest models of games with a combined qualitative and quantitative objective. Our main results are as follows: (a) exponential memory is necessary and sufficient for winning strategies in energy parity games; (b) the problem of deciding the winner in energy parity games can be solved in NP \cap coNP; and (c) we give an algorithm to solve energy parity games by reduction to energy games. We also show that the problem of deciding the winner in energy parity games is polynomially equivalent to the problem of deciding the winner in mean-payoff parity games, while optimal strategies may require infinite memory in mean-payoff parity games. As a consequence we obtain a conceptually simple algorithm to solve mean-payoff parity games.


💡 Research Summary

Energy parity games (EPGs) are a class of infinite two‑player turn‑based games played on weighted directed graphs. Each vertex carries a priority (a natural number) and each edge a weight (an integer). A play starts from an initial vertex with an initial energy level c₀ ≥ 0. As the play proceeds, the energy level is updated by adding the weight of each traversed edge; the energy must never drop below zero. Simultaneously, the parity condition requires that the smallest priority seen infinitely often along the play be even. Thus player 1 wins a play if and only if both the quantitative energy constraint and the qualitative parity condition are satisfied.
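On a single ultimately periodic play (a finite prefix followed by a repeated cycle), this winning condition can be checked directly. The sketch below is illustrative; the function and variable names are ours, not taken from the paper:

```python
# Illustrative sketch (all names are ours, not from the paper): decide whether
# the ultimately periodic play prefix.(cycle)^omega satisfies the energy parity
# objective for initial credit c0.

def wins_energy_parity(prefix, cycle, priority, weight, c0):
    """prefix, cycle: vertex lists; priority[v]: int; weight[(u, v)]: int."""
    energy = c0
    path = prefix + cycle
    # Energy condition on the first unfolding: the running sum must never
    # drop below zero (the final edge closes the cycle).
    for u, v in zip(path, path[1:] + [cycle[0]]):
        energy += weight[(u, v)]
        if energy < 0:
            return False
    # A cycle of negative total weight eventually exhausts any finite credit;
    # a non-negative cycle keeps every later unfolding at least as high as the
    # first one, so checking one unfolding suffices.
    cycle_edges = list(zip(cycle, cycle[1:] + [cycle[0]]))
    if sum(weight[e] for e in cycle_edges) < 0:
        return False
    # Parity condition: the priorities seen infinitely often are exactly those
    # on the cycle, and the smallest of them must be even.
    return min(priority[v] for v in cycle) % 2 == 0
```

For instance, with an edge of weight −1 into a self-loop of weight +1, an initial credit of 1 suffices, but a credit of 0 already violates the energy constraint on the first edge.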

The paper makes three central contributions. First, it establishes that exponential memory is both necessary and sufficient for winning strategies in EPGs. The authors construct families of graphs where any strategy that keeps the energy non‑negative while satisfying the parity condition must remember, for each priority level, how many times certain “dangerous” cycles have been taken. This bookkeeping forces a memory size that grows exponentially in the number of vertices. Conversely, they show how to synthesize a deterministic strategy using exactly that amount of memory, proving the tight bound.

Second, the decision problem “does player 1 have a winning strategy from a given initial configuration?” lies in NP ∩ coNP. The NP side is witnessed by a finite‑memory strategy together with a bound on the maximal energy needed; verification consists of checking that every reachable configuration respects the energy constraint and that the parity condition holds on every induced infinite path. The coNP side is witnessed by a counter‑strategy for player 2 that forces the energy to drop below zero or the parity condition to fail, and this witness can likewise be checked in polynomial time. This result mirrors the known complexity of pure energy games and pure parity games, showing that their combination does not increase the worst‑case decision complexity.

Third, the authors present an algorithm that solves EPGs by reduction to ordinary energy games. The reduction proceeds by layering the original game according to priority levels: for each priority k a sub‑game Gₖ is built in which the objective is to keep the energy non‑negative while never visiting a vertex of priority lower than k. These sub‑games are solved sequentially, starting from the highest priority, using any pseudo‑polynomial energy‑game solver (e.g., value iteration over progress measures, or strategy improvement). The outcomes of the sub‑games are then combined to obtain the global winning region of the original EPG. The overall running time is polynomial in the size of the graph and the maximal absolute weight, multiplied by a factor exponential in the number of distinct priorities (which matches the exponential memory bound).
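As a concrete reference point for the inner solving step, here is a minimal sketch of the classical progress‑measure (value‑iteration) solver for plain energy games. The naive fixpoint loop and all names are our own simplifications under standard assumptions (every vertex has at least one outgoing edge); this is not the paper's construction:

```python
# Minimal sketch of a progress-measure solver for plain energy games.
# f[v] is the least initial credit the energy player needs at v; values above
# the bound TOP mean the vertex is losing. Assumes every vertex has a successor.

def solve_energy_game(vertices, edges, owner, W):
    """vertices: list of vertex ids; edges: dict v -> list of (u, weight);
    owner[v]: 1 if the energy player moves at v, else 2; W: max |weight|."""
    TOP = (len(vertices) - 1) * W + 1      # credits above this never help
    f = {v: 0 for v in vertices}           # optimistic start, lifted upward

    def lift(v):
        # credit needed if the play takes edge (v, u): pay -w, then need f[u]
        need = [max(0, f[u] - w) for (u, w) in edges[v]]
        # the energy player picks the cheapest successor, the opponent the worst
        return min(need) if owner[v] == 1 else max(need)

    changed = True
    while changed:                         # naive fixpoint iteration
        changed = False
        for v in vertices:
            new = min(lift(v), TOP)
            if new > f[v]:
                f[v] = new
                changed = True
    # None marks vertices from which no finite initial credit suffices
    return {v: (f[v] if f[v] < TOP else None) for v in vertices}
```

On a two-vertex cycle with weights −2 and +3, the solver reports minimal credits 2 and 0; if both weights are negative, every vertex comes out losing.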

A further major insight is the polynomial‑time equivalence between EPGs and mean‑payoff parity games (MPPGs). By a standard transformation between energy and mean‑payoff objectives, an MPPG can be turned into an EPG with an appropriately shifted and scaled weight function, and vice versa. Consequently, the decision problems for the two models are inter‑reducible in polynomial time. However, the paper also shows a qualitative difference: optimal strategies for MPPGs may require infinite memory, because an optimal play may have to alternate between visiting low even‑priority vertices and spending ever longer stretches in high‑payoff cycles, whereas winning in EPGs always admits finite (though possibly exponential) memory strategies.
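For the quantitative halves of the two models (without parity), the link is a simple weight shift: player 1 can secure mean payoff at least p/q iff she can secure mean payoff at least 0 after replacing each weight w by q·w − p (for q > 0, the long-run average scales the same way), and the threshold‑0 mean‑payoff objective coincides with winning the energy game for some finite initial credit. A minimal sketch; the helper name is ours:

```python
def shift_weights(edges, p, q):
    """Rescale weights so that 'mean payoff >= p/q' becomes 'mean payoff >= 0'.
    edges: dict v -> list of (successor, weight); p/q: rational threshold, q > 0.
    The resulting threshold-0 game can then be solved as an energy game."""
    return {v: [(u, q * w - p) for (u, w) in succ] for v, succ in edges.items()}
```

For example, testing the threshold 1/2 turns a weight of 3 into 2·3 − 1 = 5.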

The authors discuss practical implications. The reduction to energy games means that existing, highly optimized energy‑game solvers can be reused for EPGs without substantial redesign. Although the theoretical memory requirement is exponential, many real‑world specifications involve a modest number of priorities and bounded weight ranges, making the approach feasible in practice. Moreover, membership in NP ∩ coNP suggests that, like parity and energy games, EPGs are unlikely to be NP‑hard (an NP‑hard problem in NP ∩ coNP would imply NP = coNP), leaving open the possibility of a polynomial‑time algorithm.

In summary, the paper provides a comprehensive treatment of games that combine qualitative ω‑regular objectives with quantitative resource constraints. It settles the memory complexity (exponential), places the decision problem in NP ∩ coNP, offers a concrete reduction‑based algorithm, and clarifies the relationship with mean‑payoff parity games. These results deepen our theoretical understanding and open the door to efficient tool support for the synthesis of resource‑aware reactive systems.

