Game Theory with Costly Computation
We develop a general game-theoretic framework for reasoning about strategic agents performing possibly costly computation. In this framework, many traditional game-theoretic results (such as the existence of a Nash equilibrium) no longer hold. Nevertheless, we can use the framework to provide psychologically appealing explanations of observed behavior in well-studied games (such as finitely repeated prisoner’s dilemma and rock-paper-scissors). Furthermore, we provide natural conditions on games sufficient to guarantee that equilibria exist. As an application of this framework, we consider a notion of game-theoretic implementation of mediators in computational games. We show that a special case of this notion is equivalent to a variant of the traditional cryptographic definition of protocol security; this result shows that, when taking computation into account, the two approaches used for dealing with “deviating” players in two different communities – Nash equilibrium in game theory and zero-knowledge “simulation” in cryptography – are intimately related.
💡 Research Summary
The paper introduces a comprehensive game‑theoretic framework that explicitly incorporates the cost of computation into strategic decision‑making. Traditional game theory assumes that players can select any strategy at no cost and that the resulting payoff is the sole determinant of rational behavior. In many realistic settings—human decision makers, autonomous agents, or cryptographic protocols—executing a strategy requires algorithmic design, processing time, memory, and energy, all of which entail measurable expenses. By modeling these expenses, the authors reveal that many classic results, most notably the guaranteed existence of a Nash equilibrium, no longer hold automatically.
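To see concretely how costly computation can destroy equilibrium existence, consider rock-paper-scissors where playing a fixed pure strategy is free but the "randomize uniformly" strategy carries a small computational cost. This is a hedged, simplified illustration in the spirit of the paper's rock-paper-scissors discussion, not its formal model; the strategy set, cost value, and function names below are my own choices:

```python
import itertools

# Illustrative sketch: rock-paper-scissors where pure strategies are free
# but uniform randomization costs EPS in computational effort.
EPS = 0.01
PURE = ["R", "P", "S"]
STRATS = PURE + ["rand"]

def base_payoff(x, y):
    """Expected zero-sum payoff to the first player (win=1, lose=-1, tie=0)."""
    if x == "rand" or y == "rand":
        return 0.0  # against (or with) uniform randomization the expectation is 0
    beats = {("R", "S"), ("S", "P"), ("P", "R")}
    if x == y:
        return 0.0
    return 1.0 if (x, y) in beats else -1.0

def utility(x, y):
    """Payoff minus the cost of computing the chosen strategy."""
    return base_payoff(x, y) - (EPS if x == "rand" else 0.0)

def is_nash(x, y):
    """No unilateral deviation (within STRATS) improves either player."""
    return (all(utility(x, y) >= utility(d, y) for d in STRATS) and
            all(utility(y, x) >= utility(d, x) for d in STRATS))

equilibria = [p for p in itertools.product(STRATS, STRATS) if is_nash(*p)]
print(equilibria)  # -> []
```

Every pure profile is undermined by the counter-strategy, and randomizing is strictly worse than some free pure reply, so the best-response search over these strategies cycles and no profile is stable.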
Model definition.
Each player i is endowed with a set of algorithms (A_i) rather than a static action set. An algorithm (a \in A_i) maps the observable game history (or other players’ moves) to a concrete action. A cost function (c_i(a) \ge 0) quantifies the computational resources required to run (a); this can be based on time complexity, memory usage, or any domain‑specific metric. The usual payoff function (u_i) is retained, but the player’s final utility becomes
\[ u'_i(\vec{a}) \;=\; u_i(\vec{a}) \;-\; c_i(a_i), \]
where (\vec{a}) denotes the profile of algorithms chosen by the players.
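The model above can be sketched in a few lines. This is a minimal illustration under my own naming conventions (the `Algorithm` type, the example strategies, and the cost values are assumptions, not the paper's notation):

```python
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class Algorithm:
    """A strategy as the paper frames it: a program plus its running cost."""
    name: str
    act: Callable[[Sequence[str]], str]  # maps observed history -> action
    cost: float                          # c_i(a): computational expense, >= 0

def net_utility(payoff: float, algo: Algorithm) -> float:
    """u'_i = u_i(outcome) - c_i(a): payoff net of computation cost."""
    return payoff - algo.cost

# A cheap reactive rule: copy the opponent's last move (cooperate initially).
tit_for_tat = Algorithm("tit-for-tat", lambda h: h[-1] if h else "C", cost=0.0)

# A stand-in for an expensive solver (e.g., full backward induction);
# the action and cost here are illustrative placeholders.
exhaustive = Algorithm("exhaustive-solver", lambda h: "D", cost=0.5)

print(net_utility(3.0, tit_for_tat))  # 3.0
print(net_utility(3.0, exhaustive))   # 2.5
```

The point of the sketch: two algorithms earning the same raw payoff can differ in net utility, so a rational player may prefer a cruder but cheaper algorithm.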