Mean field limit of a continuous time finite state game

Notice: This research summary and analysis were automatically generated using AI technology. For absolute accuracy, please refer to the original arXiv source.

Mean field games are a recent area of study introduced by Lions and Lasry in a series of seminal papers in 2006. Mean field games model situations of competition between a large number of rational agents that play non-cooperative dynamic games under certain symmetry assumptions. The key step is to develop a mean field model, similar to what is done in statistical physics, in order to construct a mathematically tractable model. A main question that arises in the study of such mean field problems is the rigorous justification of the mean field models by a limiting procedure. In this paper we consider the mean field limit of a two-state Markov decision problem as the number of players $N\to \infty$. First we establish the existence and uniqueness of a symmetric partial information Markov perfect equilibrium. Then we derive a mean field model and characterize its main properties. This mean field limit is a system of coupled ordinary differential equations with initial-terminal data. Our main result is the convergence as $N\to \infty$ of the $N$ player game to the mean field model and an estimate of the rate of convergence.


💡 Research Summary

The paper investigates a continuous‑time, finite‑state (two‑state) dynamic game played by a large population of rational agents and establishes a rigorous mean‑field limit as the number of players $N$ tends to infinity. The authors begin by formulating the $N$‑player game: each agent occupies one of two states (conventionally labeled 0 and 1) and can control the transition rates between these states through a control variable $u_i(t)$ drawn from a finite set. The transition intensity depends both on the agent's own control and on the empirical distribution of states across the whole population, denoted $\mu^N(t)=\frac{1}{N}\sum_{j=1}^N\delta_{X_j(t)}$. Agents observe only their own state and the aggregate distribution (partial information) and aim to minimise a cost functional consisting of a running cost $L(X_i(t),u_i(t),\mu^N(t))$ and a terminal cost $g(X_i(T),\mu^N(T))$.
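The $N$‑player setup above can be sketched numerically. The following is a minimal simulation under illustrative assumptions: the jump intensities (here hard-coded as $1+\mu_1$ for upward and $1-0.5\,\mu_1$ for downward jumps, where $\mu_1$ is the empirical mass in state 1) are placeholders, not the paper's model.

```python
import random

def simulate(N=50, T=1.0, dt=0.01, seed=0):
    """Simulate N agents on states {0, 1}. Jump rates depend on the
    empirical fraction of agents in state 1 (rates are illustrative
    placeholders, not the paper's model)."""
    rng = random.Random(seed)
    states = [rng.randint(0, 1) for _ in range(N)]
    path = []  # (time, empirical mass in state 1)
    for k in range(int(round(T / dt))):
        mu1 = sum(states) / N  # empirical distribution mu^N(t) on {0, 1}
        path.append((k * dt, mu1))
        for i in range(N):
            # hypothetical controlled intensities: up-rate 1 + mu1 from
            # state 0, down-rate 1 - 0.5 * mu1 from state 1
            rate = 1.0 + mu1 if states[i] == 0 else 1.0 - 0.5 * mu1
            if rng.random() < rate * dt:  # Euler scheme for the jumps
                states[i] = 1 - states[i]
    return path
```

Each agent flips state with probability (rate × dt) per time step, which is a first-order discretisation of the continuous-time jump dynamics; the recorded path of $\mu_1$ is the object whose $N\to\infty$ behaviour the paper analyses.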

Symmetric Partial‑Information Markov Perfect Equilibrium (SPMPE).
Under the symmetry assumption (all agents use the same stationary strategy) and the partial‑information structure, the authors define a Markov perfect equilibrium in which each player's optimal policy depends only on the current state and the population distribution. By applying dynamic programming and solving the associated Hamilton‑Jacobi‑Bellman (HJB) equations, they prove existence and uniqueness of the SPMPE. The key observation is that, because the state space is binary, the HJB system reduces to a pair of linear equations, allowing an explicit expression for the optimal feedback control $u^*(x,\mu)$.
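With a finite control set, the pointwise minimisation behind that explicit feedback can be sketched directly: one minimises a pre-Hamiltonian combining the running cost and the rate-weighted value difference between the two states. The functions `L` and `alpha` below are hypothetical stand-ins for the model's running cost and controlled intensity.

```python
def optimal_control(x, dv, controls, alpha, L, mu):
    """Explicit feedback over a finite control set: minimise the
    pre-Hamiltonian L(x, u, mu) + alpha(x, u, mu) * dv, where dv is the
    value difference v(other state) - v(x). alpha (controlled jump
    intensity) and L (running cost) are illustrative placeholders."""
    return min(controls, key=lambda u: L(x, u, mu) + alpha(x, u, mu) * dv)
```

For instance, with the toy choices `L = lambda x, u, mu: u ** 2` and `alpha = lambda x, u, mu: 1.0 + u`, a negative value difference (jumping is profitable) selects the larger control, a positive one the smaller.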

Mean‑field limit and the limiting model.
When $N\to\infty$, the empirical distribution $\mu^N(t)$ converges (in probability) to a deterministic measure $\mu(t)$, often called the mean field. The limiting dynamics are captured by a coupled system of ordinary differential equations (ODEs):

  1. State‑probability ODE: Let $p(t)=\mathbb{P}(X(t)=1)$. The evolution of $p$ follows the Kolmogorov forward equation
    $$\dot p(t) = \big(1-p(t)\big)\,\alpha\big(0,u^*(0,\mu(t)),\mu(t)\big) - p(t)\,\alpha\big(1,u^*(1,\mu(t)),\mu(t)\big), \qquad p(0)=p_0,$$
    where $\alpha(x,u,\mu)$ denotes the controlled transition intensity out of state $x$.
  2. Value‑function ODE: The value functions $v(t,0)$ and $v(t,1)$ solve the backward HJB system with terminal condition $v(T,x)=g(x,\mu(T))$. The coupling of the forward equation for $p$ with the backward equation for $v$ through the optimal feedback $u^*$ is what makes the mean‑field limit an initial‑terminal value problem.
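A minimal numerical sketch of the forward dynamics: for a two-state chain, the fraction $p(t)$ of agents in state 1 satisfies $\dot p = (1-p)\,\lambda_{01} - p\,\lambda_{10}$ for (possibly $p$-dependent) jump rates. A simple Euler integration, with the rate functions left as illustrative inputs, is:

```python
def forward_euler(p0, rate_up, rate_down, T=1.0, dt=0.001):
    """Integrate dp/dt = (1-p)*rate_up(p) - p*rate_down(p), the
    Kolmogorov forward equation of a two-state chain whose jump rates
    may depend on the mean field p itself (rates are illustrative)."""
    p = p0
    for _ in range(int(round(T / dt))):
        p += dt * ((1.0 - p) * rate_up(p) - p * rate_down(p))
    return p
```

With constant rates `rate_up = rate_down = 1`, the equation reduces to $\dot p = 1 - 2p$, so $p(t)$ relaxes to the equilibrium $p = 1/2$ regardless of the initial datum, which gives a quick sanity check on the scheme.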
