Mean-Payoff Pushdown Games


Two-player games on graphs are central to many problems in formal verification and program analysis, such as synthesis and verification of open systems. In this work we consider solving recursive game graphs (or pushdown game graphs) that can model the control flow of sequential programs with recursion. While pushdown games have been studied before with qualitative objectives, such as reachability and $\omega$-regular objectives, in this work we study for the first time such games with the most well-studied quantitative objective, namely, mean-payoff objectives. In pushdown games two types of strategies are relevant: (1) global strategies, which depend on the entire global history; and (2) modular strategies, which have only local memory and thus do not depend on the context of invocation, but only on the history of the current invocation of the module. Our main results are as follows: (1) one-player pushdown games with mean-payoff objectives under global strategies are decidable in polynomial time; (2) two-player pushdown games with mean-payoff objectives under global strategies are undecidable; (3) one-player pushdown games with mean-payoff objectives under modular strategies are NP-hard; (4) two-player pushdown games with mean-payoff objectives under modular strategies can be solved in NP (i.e., both one-player and two-player pushdown games with mean-payoff objectives under modular strategies are NP-complete). We also establish the optimal strategy complexity, showing that global strategies for mean-payoff objectives require infinite memory even in one-player pushdown games, whereas memoryless modular strategies are sufficient in two-player pushdown games. Finally, we also show that all the problems have the same complexity if the stack boundedness condition is added, where along with the mean-payoff objective the player must also ensure that the stack height is bounded.


💡 Research Summary

The paper investigates two‑player games played on pushdown graphs—structures that naturally model the control flow of recursive programs—under the quantitative objective of mean‑payoff. A mean‑payoff objective evaluates the long‑run average of integer weights attached to transitions, thereby capturing performance or resource‑consumption criteria over infinite executions. The authors distinguish two classes of strategies. Global strategies may depend on the entire history of the play, thus allowing unbounded memory. Modular strategies, by contrast, are confined to the local history of the currently active module (or procedure) and ignore the calling context, reflecting the typical modular design of software.
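As a concrete illustration (with made-up weights, not taken from the paper), the mean-payoff of an ultimately periodic play prefix·cycleω is the limit of the running averages of its transition weights, and that limit equals the average weight of the repeated cycle: any finite prefix washes out.

```python
def running_averages(prefix, cycle, n):
    """Running averages of the first n transition weights of the
    ultimately periodic play prefix . cycle^omega."""
    weights = list(prefix)
    while len(weights) < n:
        weights.extend(cycle)
    averages, total = [], 0
    for i, w in enumerate(weights[:n], start=1):
        total += w
        averages.append(total / i)
    return averages

# An expensive prefix [10, -10] followed by the cycle [1, 3] forever:
# the running average tends to mean(cycle) = 2.0.
avgs = running_averages(prefix=[10, -10], cycle=[1, 3], n=2000)
```

After 2000 steps the running average is already within 0.01 of the cycle mean 2.0, regardless of the prefix weights.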

The main contributions are fourfold. First, for one‑player pushdown games (i.e., where only Player 1 makes choices) under global strategies, the existence of a winning strategy for a given mean‑payoff threshold can be decided in polynomial time. The authors achieve this by transforming the pushdown game into a finite‑state abstraction enriched with linear constraints that capture the average weight condition, and then solving the resulting linear program. Second, when both players are present and global strategies are allowed, the problem becomes undecidable. This is shown by encoding the behavior of a Turing machine into the pushdown game; the unrestricted memory of a global strategy simulates the Turing tape, and the mean‑payoff condition is used to enforce halting behavior, yielding a reduction from the halting problem.
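The paper's polynomial-time procedure operates on the pushdown game itself via the finite-state abstraction and linear constraints described above. As a simplified finite-state analogue only (this is not the paper's algorithm), the best mean-payoff a single player can secure on an ordinary strongly connected weighted graph is its maximum cycle mean, which Karp's classic algorithm computes in polynomial time:

```python
def max_cycle_mean(n, edges):
    """Karp's algorithm for the maximum cycle mean of a strongly
    connected directed graph with n vertices (0..n-1) and weighted
    edges given as (u, v, w) triples."""
    NEG = float("-inf")
    # D[k][v] = maximum weight of a walk with exactly k edges
    # ending at v, starting from vertex 0.
    D = [[NEG] * n for _ in range(n + 1)]
    D[0][0] = 0.0
    for k in range(1, n + 1):
        for (u, v, w) in edges:
            if D[k - 1][u] != NEG:
                D[k][v] = max(D[k][v], D[k - 1][u] + w)
    best = NEG
    for v in range(n):
        if D[n][v] == NEG:
            continue
        vals = [(D[n][v] - D[k][v]) / (n - k)
                for k in range(n) if D[k][v] != NEG]
        if vals:
            best = max(best, min(vals))
    return best

# Toy graph: the cycle 0->1->2->0 has mean (1+2+3)/3 = 2,
# while the self-loop at 0 has mean 1.
mu = max_cycle_mean(3, [(0, 1, 1.0), (1, 2, 2.0), (2, 0, 3.0), (0, 0, 1.0)])
# -> 2.0
```

The recursion of a pushdown game breaks this simple picture, which is why the paper needs the abstraction-plus-linear-programming construction rather than a direct cycle analysis.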

Third, the paper turns to modular strategies. It proves that even in the one‑player setting, determining whether a modular strategy can achieve a given mean‑payoff is NP‑hard. The reduction is from SAT: variables and clauses are represented by modules, and the mean‑payoff threshold forces a correspondence between satisfying assignments and feasible modular strategies. Fourth, for two‑player games with modular strategies the problem lies in NP, and together with the previous hardness result this establishes NP‑completeness for both the one‑player and two‑player modular cases. The key insight is that a modular strategy can be represented by a polynomial‑size certificate: a collection of memoryless local strategies for each module, which can be guessed and verified in polynomial time.
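The guess-and-check structure behind the NP upper bound can be mimicked by a brute-force sketch. The module names, states, and the `verify` predicate below are illustrative placeholders; the paper's actual polynomial-time verification on the pushdown game is substantially more involved, and an NP machine guesses a single profile rather than enumerating them all.

```python
from itertools import product

def local_strategies(module):
    """All memoryless local strategies for one module: one fixed
    action per local state, ignoring the calling context."""
    states, actions = module
    for choice in product(actions, repeat=len(states)):
        yield dict(zip(states, choice))

def find_modular_strategy(modules, verify):
    """Brute-force stand-in for the NP guess: try every profile of
    memoryless local strategies (one per module) and return the first
    profile accepted by `verify`, a placeholder for the paper's
    polynomial-time certificate check."""
    for profile in product(*(list(local_strategies(m))
                             for m in modules.values())):
        candidate = dict(zip(modules.keys(), profile))
        if verify(candidate):
            return candidate
    return None

# Toy instance with hypothetical modules and a toy verifier that
# merely demands a particular choice in module "aux".
modules = {"main": (["s0", "s1"], ["call_aux", "ret"]),
           "aux": (["t0"], ["noop", "pop"])}
result = find_modular_strategy(modules,
                               lambda c: c["aux"]["t0"] == "pop")
```

The certificate is polynomial-size because each local strategy is just a table from local states to actions, which is exactly why membership in NP goes through.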

Beyond decision complexity, the authors analyze strategy memory requirements. They show that global strategies for mean‑payoff objectives may need infinite memory even in one‑player pushdown games, because the average weight must be adjusted based on arbitrarily long prefixes of the play. In contrast, memoryless modular strategies suffice: no memory beyond the current state of the active module is required.

Finally, the paper studies the effect of imposing a stack‑boundedness condition, which requires that the stack height never exceed a fixed bound while the mean‑payoff objective is satisfied. They demonstrate that all the previously obtained complexity results remain unchanged under this extra restriction, indicating that the algorithms are robust to practical concerns such as preventing stack overflow.

Overall, the work extends the theory of pushdown games—previously limited to qualitative objectives like reachability or ω‑regular conditions—by introducing the most studied quantitative objective, mean‑payoff. It delineates a clear boundary between decidable and undecidable settings (global vs. modular strategies), pinpoints the exact computational complexity (polynomial, NP‑complete, undecidable), and clarifies the memory requirements of optimal strategies. These findings have immediate implications for the verification and synthesis of recursive programs where long‑run quantitative performance matters, and they open avenues for designing efficient approximation or heuristic methods for the NP‑complete modular cases.

