Population-aware Online Mirror Descent for Mean-Field Games by Deep Reinforcement Learning
Mean Field Games (MFGs) have the ability to handle large-scale multi-agent systems, but learning Nash equilibria in MFGs remains a challenging task. In this paper, we propose a deep reinforcement learning (DRL) algorithm that achieves population-dependent Nash equilibrium without the need for averaging or sampling from history, inspired by Munchausen RL and Online Mirror Descent. Through the design of an additional inner-loop replay buffer, the agents can effectively learn to achieve Nash equilibrium from any distribution, mitigating catastrophic forgetting. The resulting policy can be applied to various initial distributions. Numerical experiments on four canonical examples demonstrate that our algorithm has better convergence properties than SOTA algorithms, in particular a DRL version of Fictitious Play for population-dependent policies.
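To illustrate the Munchausen/Online Mirror Descent connection mentioned in the abstract, here is a minimal, hedged tabular sketch. The function names, temperatures, and tabular setting are illustrative assumptions, not the paper's actual deep-RL implementation: the Munchausen bonus (a scaled log-probability of the taken action) makes the Q-function implicitly accumulate past Q-values, which yields an OMD-style update without averaging over or sampling from history.

```python
import numpy as np

def softmax_policy(q_row, tau):
    """Mirror-descent policy: softmax of the temperature-scaled Q-values."""
    z = q_row / tau
    z = z - z.max()  # numerical stability
    p = np.exp(z)
    return p / p.sum()

def munchausen_target(r, log_pi_sa, next_q_row, next_pi_row,
                      gamma=0.99, tau=0.1, alpha=0.9):
    """One-step regularized target (illustrative parameter values).

    The alpha * tau * log pi(a|s) bonus is the Munchausen term; the next-state
    value is the entropy-regularized (soft) expectation under the policy.
    """
    bonus = alpha * tau * log_pi_sa
    soft_v = np.sum(next_pi_row * (next_q_row - tau * np.log(next_pi_row + 1e-12)))
    return r + bonus + gamma * soft_v
```

In the paper's population-aware setting, the Q-function would additionally condition on the current population distribution; this sketch omits that dependence for brevity.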
Matthieu Geist, Zida Wu, Mathieu Lauriere, Olivier Pietquin, Ankur Mehta, Samuel Jia Cong Chua
Computing technology, computer technology
Matthieu Geist, Zida Wu, Mathieu Lauriere, Olivier Pietquin, Ankur Mehta, Samuel Jia Cong Chua. Population-aware Online Mirror Descent for Mean-Field Games by Deep Reinforcement Learning [EB/OL]. (2024-03-06) [2025-08-30]. https://arxiv.org/abs/2403.03552.