Iterated Regret Minimization: A More Realistic Solution Concept
For some well-known games, such as the Traveler’s Dilemma or the Centipede Game, traditional game-theoretic solution concepts, most notably Nash equilibrium, predict outcomes that are not consistent with empirical observations. In this paper, we introduce a new solution concept, iterated regret minimization, which exhibits the same qualitative behavior as that observed in experiments in many games of interest, including Traveler’s Dilemma, the Centipede Game, Nash bargaining, and Bertrand competition. As the name suggests, iterated regret minimization involves the iterated deletion of strategies that do not minimize regret.
💡 Research Summary
The paper tackles a well‑known discrepancy between classical game‑theoretic predictions—most prominently Nash equilibrium—and actual human behavior observed in laboratory experiments. Games such as the Traveler’s Dilemma, the Centipede Game, Nash bargaining, and Bertrand competition all exhibit outcomes that are far more cooperative or less extreme than the Nash prediction. The authors argue that this gap stems from the fact that real decision‑makers do not aim to maximize expected payoff under the assumption of perfect rationality; instead, they tend to avoid the worst‑case loss they would experience if they chose a suboptimal action. This intuition is formalized through the concept of regret: the difference between the payoff a player actually receives and the payoff they could have obtained by playing the best possible alternative given the opponents’ strategies.
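This notion of regret translates directly into code. The sketch below assumes a two-player game given as a payoff matrix for the row player; the function and variable names are illustrative, not taken from the paper:

```python
def regret(payoff, row, col):
    """Regret of playing `row` when the opponent plays `col`:
    the payoff of the best response to `col` minus the payoff
    actually obtained with `row`."""
    best = max(payoff[r][col] for r in range(len(payoff)))
    return best - payoff[row][col]

# Matching Pennies for the row player (0 = Heads, 1 = Tails).
payoff = [[1, -1],
          [-1, 1]]
print(regret(payoff, 0, 1))  # chose Heads against Tails: regret 1 - (-1) = 2
print(regret(payoff, 1, 1))  # Tails was the best response: regret 0
```

A regret of zero means the chosen strategy was already a best response to what the opponent played.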
The core contribution is a new solution concept called Iterated Regret Minimization (IRM). The construction proceeds in two steps. First, for a given finite normal‑form game, each player computes the maximum regret associated with each of his pure strategies, assuming the opponent may play any strategy in the current strategy set. Strategies that do not achieve the minimal possible maximum regret are eliminated. This yields a reduced set of “regret‑minimizing” strategies. Second, the elimination process is iterated: the reduced set becomes the new universe of strategies, the regret calculations are repeated, and any further non‑minimizing strategies are deleted. The iteration continues until no more strategies can be removed. The final surviving strategy profiles constitute the IRM solution.
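The two-step procedure can be sketched for a finite two-player bimatrix game as follows. This is a minimal illustration under the paper's stated scheme (compute each strategy's maximum regret against the opponent's surviving set, keep only the minimizers, repeat); the function names are assumptions of this sketch:

```python
def iterated_regret_minimization(payoff_row, payoff_col):
    """Repeatedly delete, for each player, every strategy whose maximum
    regret (against the opponent's surviving strategies) is not minimal,
    until the surviving sets stop shrinking."""
    rows = set(range(len(payoff_row)))
    cols = set(range(len(payoff_row[0])))
    while True:
        # maximum regret of each surviving row strategy
        row_regret = {
            r: max(max(payoff_row[rr][c] for rr in rows) - payoff_row[r][c]
                   for c in cols)
            for r in rows
        }
        # maximum regret of each surviving column strategy
        col_regret = {
            c: max(max(payoff_col[r][cc] for cc in cols) - payoff_col[r][c]
                   for r in rows)
            for c in cols
        }
        new_rows = {r for r in rows if row_regret[r] == min(row_regret.values())}
        new_cols = {c for c in cols if col_regret[c] == min(col_regret.values())}
        if new_rows == rows and new_cols == cols:
            return rows, cols  # fixed point: no further deletions possible
        rows, cols = new_rows, new_cols

# Prisoner's Dilemma (0 = cooperate, 1 = defect): the dominant strategy
# also minimizes regret, so IRM agrees with Nash here.
pd_row = [[3, 0], [5, 1]]
pd_col = [[3, 5], [0, 1]]
print(iterated_regret_minimization(pd_row, pd_col))  # → ({1}, {1})
```

Because each round either shrinks a strategy set or terminates, the loop necessarily halts in a finite game, which is the convergence property discussed below.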
The authors prove several important properties. In any finite game, the iterated deletion process converges after a finite number of steps, guaranteeing existence of at least one IRM profile. When the game possesses a unique Nash equilibrium that is also a regret‑minimizer, IRM coincides with that equilibrium; otherwise, IRM typically yields a set of strategies that are more “reasonable” from an experimental standpoint. The paper also shows that IRM satisfies consistency (the set of surviving strategies is a fixed point of the deletion operator) and closedness (the limit set is closed under the regret‑minimization operator). Moreover, the authors extend the definition to mixed strategies by using expected regret, allowing IRM to be applied to games with continuous strategy spaces.
To illustrate the explanatory power of IRM, the paper analyses four canonical examples:
- Traveler’s Dilemma – The Nash equilibrium predicts that both players choose the lowest possible claim (e.g., 2), leading to a payoff far below the socially optimal outcome. IRM, after a few rounds of deletion, retains only high claims near the top of the interval (the single claim 97 when claims range from 2 to 100 and the penalty is 2), matching the high claims that participants typically make in experiments.
- Centipede Game – Nash equilibrium prescribes immediate defection at the first decision node, yet experimental subjects often cooperate for several rounds before stopping. Under IRM, exiting immediately carries high regret whenever the opponent would have continued, so early-exit strategies are eliminated; the resulting profile has cooperation persisting for a number of moves before a rational exit, closely mirroring observed behavior.
- Nash Bargaining – The classic solution (the Nash product) can be interpreted as a symmetric compromise, but experimental data reveal a tendency toward asymmetric splits when players have different risk attitudes. IRM generates a set of bargaining outcomes that balance each player’s maximum regret, producing splits that are more consistent with the empirical distribution of offers and counter‑offers.
- Bertrand Competition – Under perfect competition, Nash predicts price equal to marginal cost, which is rarely observed in real markets. IRM, by deleting price‑under‑cutting strategies that would generate large regret if rivals do not follow, leaves a band of higher prices that are stable under the regret‑minimization criterion, thereby offering a theoretical justification for observed price rigidity.
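The Traveler’s Dilemma case can be checked with a short standalone script. The sketch below assumes the standard formulation (claims from 2 to 100, penalty 2, lower claimant paid their claim plus the penalty, higher claimant paid the lower claim minus the penalty) and exploits the game's symmetry so one surviving set suffices:

```python
def td_payoff(mine, theirs, penalty=2):
    """Traveler's Dilemma payoff: the lower claim wins the penalty as a
    bonus; the higher claimant gets the lower claim minus the penalty."""
    if mine < theirs:
        return mine + penalty
    if mine > theirs:
        return theirs - penalty
    return mine

claims = set(range(2, 101))
while True:
    # maximum regret of each surviving claim against every surviving claim
    max_regret = {
        k: max(max(td_payoff(r, c) for r in claims) - td_payoff(k, c)
               for c in claims)
        for k in claims
    }
    best = min(max_regret.values())
    survivors = {k for k in claims if max_regret[k] == best}
    if survivors == claims:
        break
    claims = survivors

print(claims)  # → {97}
```

The first round of deletion leaves the claims 96 through 100 (each with maximum regret 3); the second round leaves only 97 (maximum regret 2), at which point the process is at a fixed point.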
Beyond these case studies, the authors discuss the relationship between IRM and bounded rationality. Psychological literature suggests that humans employ “regret aversion” as a heuristic: they prefer actions that protect them from the worst‑case disappointment rather than those that maximize expected gains. IRM captures this heuristic in a precise, game‑theoretic framework, positioning it as a bridge between normative models and descriptive findings.
The paper concludes by highlighting several avenues for future research. One direction is to extend IRM to dynamic games with imperfect information, where beliefs about future moves must be incorporated into regret calculations. Another is to develop algorithmic tools for computing IRM in large‑scale games, possibly leveraging convex optimization for continuous strategy spaces. Finally, the authors call for systematic experimental work that directly tests the predictive accuracy of IRM against alternative concepts such as quantal response equilibrium or level‑k reasoning.
In sum, iterated regret minimization offers a compelling alternative to Nash equilibrium, preserving much of the analytical elegance of classical game theory while delivering predictions that align far more closely with how real people actually play strategic games.