Solving Continuous Mean Field Games: Deep Reinforcement Learning for Non-Stationary Dynamics
📝 Original Info
- Title: Solving Continuous Mean Field Games: Deep Reinforcement Learning for Non-Stationary Dynamics
- ArXiv ID: 2510.22158
- Date: 2025-10-25
- Authors: Author information not provided in the source material.
📝 Abstract
Mean field games (MFGs) have emerged as a powerful framework for modeling interactions in large-scale multi-agent systems. Despite recent advancements in reinforcement learning (RL) for MFGs, existing methods are typically limited to finite spaces or stationary models, hindering their applicability to real-world problems. This paper introduces a novel deep reinforcement learning (DRL) algorithm specifically designed for non-stationary continuous MFGs. The proposed approach builds upon a Fictitious Play (FP) methodology, leveraging DRL for best-response computation and supervised learning for average policy representation. Furthermore, it learns a representation of the time-dependent population distribution using a Conditional Normalizing Flow. To validate the effectiveness of our method, we evaluate it on three different examples of increasing complexity. By addressing critical limitations in scalability and density approximation, this work represents a significant advancement in applying DRL techniques to complex MFG problems, bringing the field closer to real-world multi-agent systems.
💡 Deep Analysis
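The abstract describes a Fictitious Play loop with three learned components: a DRL best response against the frozen population, a supervised-learning model of the average policy, and a Conditional Normalizing Flow over the time-dependent population distribution. Since the full text is not reproduced here, the following is only a minimal runnable sketch of that loop structure on a toy 1-D crowd-aversion game of our own construction; the closed-form best response and the empirical mean path are stand-ins for the paper's DRL and flow-based components, and all names are hypothetical.

```python
import numpy as np

# Toy sketch of the Fictitious Play (FP) loop from the abstract, on a 1-D
# crowd-aversion game (an illustrative assumption, not a paper benchmark).
rng = np.random.default_rng(0)
T, n_agents, n_iters = 10, 500, 50

avg_drift = np.zeros(T)   # stand-in for the supervised average policy
mean_path = np.zeros(T)   # stand-in for the learned time-dependent density

for k in range(1, n_iters + 1):
    # (1) Best response against the frozen population: a myopic closed-form
    #     drift away from the crowd replaces the paper's DRL step.
    br_drift = -0.5 * mean_path

    # (2) FP averaging of policies: avg_k = avg_{k-1} + (br - avg_{k-1}) / k.
    avg_drift += (br_drift - avg_drift) / k

    # (3) Density estimation: simulate the population under the average
    #     policy and record its empirical mean at each time step (the paper
    #     fits a Conditional Normalizing Flow instead).
    x = rng.normal(1.0, 0.5, n_agents)
    for t in range(T):
        x = x + avg_drift[t] + 0.1 * rng.normal(size=n_agents)
        mean_path[t] = x.mean()

print("final mean path:", np.round(mean_path, 3))
```

The time-dependent density itself can be represented with a conditional flow. Below is a one-layer conditional affine flow in PyTorch, conditioned on the time step and trained by maximum likelihood on synthetic non-stationary data; real Conditional Normalizing Flows stack many such transforms, and nothing here is taken from the paper's architecture.

```python
import math
import torch
import torch.nn as nn

class ConditionalAffineFlow(nn.Module):
    """One affine layer: x = mu(t) + exp(log_sigma(t)) * z, with z ~ N(0, 1)."""
    def __init__(self, hidden=32):
        super().__init__()
        # Small network mapping normalized time t to (mu, log_sigma).
        self.net = nn.Sequential(nn.Linear(1, hidden), nn.Tanh(), nn.Linear(hidden, 2))

    def log_prob(self, x, t):
        mu, log_sigma = self.net(t).chunk(2, dim=-1)
        z = (x - mu) * torch.exp(-log_sigma)               # invert the flow
        log_base = -0.5 * z.pow(2) - 0.5 * math.log(2 * math.pi)
        return (log_base - log_sigma).squeeze(-1)          # change of variables

flow = ConditionalAffineFlow()
opt = torch.optim.Adam(flow.parameters(), lr=1e-2)

t = torch.rand(2048, 1)                   # normalized time steps in [0, 1]
x = 2.0 * t + 0.3 * torch.randn_like(t)   # population mean drifts with time

for step in range(500):
    loss = -flow.log_prob(x, t).mean()    # maximum likelihood on (x, t) pairs
    opt.zero_grad()
    loss.backward()
    opt.step()
```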
Reference: This content is AI-processed based on open access ArXiv data.