
Finite-Sample Convergence Bounds for Trust Region Policy Optimization in Mean-Field Games

Source: arXiv
Abstract

We introduce Mean-Field Trust Region Policy Optimization (MF-TRPO), a novel algorithm designed to compute approximate Nash equilibria for ergodic Mean-Field Games (MFG) in finite state-action spaces. Building on the well-established performance of TRPO in the reinforcement learning (RL) setting, we extend its methodology to the MFG framework, leveraging its stability and robustness in policy optimization. Under standard assumptions in the MFG literature, we provide a rigorous analysis of MF-TRPO, establishing theoretical guarantees on its convergence. Our results cover both the exact formulation of the algorithm and its sample-based counterpart, where we derive high-probability guarantees and finite sample complexity. This work advances MFG optimization by bridging RL techniques with mean-field decision-making, offering a theoretically grounded approach to solving complex multi-agent problems.
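
Below is a minimal, illustrative sketch of the generic structure the abstract describes, not the paper's MF-TRPO algorithm itself: for a finite state-action mean-field game, alternate a KL-regularized, trust-region-style policy improvement step against a frozen population distribution with an update of that distribution to the stationary law induced by the new policy. The transition kernel, congestion-style reward, discounted policy evaluation (used here as a proxy for the ergodic criterion), step size, and the closed-form softmax update in place of TRPO's constrained step are all illustrative assumptions.

# Hedged sketch: fixed-point iteration for a finite ergodic MFG with a
# trust-region-style (KL-regularized) policy update. Illustrative only;
# convergence of this naive loop is not guaranteed in general.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 5, 3

# Illustrative transition kernel P[s, a, s'] and base reward (assumptions).
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
base_reward = rng.normal(size=(n_states, n_actions))

def reward(mu):
    # Mean-field coupling: agents are penalized for visiting crowded states.
    return base_reward - mu[:, None]

def stationary_distribution(policy):
    # Stationary law of the state chain M[s, s'] = sum_a pi(a|s) P[s, a, s'].
    M = np.einsum("sa,sab->sb", policy, P)
    evals, evecs = np.linalg.eig(M.T)
    mu = np.abs(evecs[:, np.argmax(evals.real)].real)
    return mu / mu.sum()

def q_values(policy, mu, gamma=0.95):
    # Evaluate the policy against the frozen mean field mu (discounted proxy
    # for the ergodic objective).
    r = reward(mu)
    M = np.einsum("sa,sab->sb", policy, P)
    v = np.linalg.solve(np.eye(n_states) - gamma * M, (policy * r).sum(axis=1))
    return r + gamma * P @ v

def kl_regularized_step(policy, q, eta=0.5):
    # Trust-region-style update: argmax_pi <pi, q> - (1/eta) KL(pi || policy),
    # whose closed form is a softmax re-weighting of the current policy.
    logits = eta * (q - q.max(axis=1, keepdims=True))
    new = policy * np.exp(logits)
    return new / new.sum(axis=1, keepdims=True)

policy = np.full((n_states, n_actions), 1.0 / n_actions)
mu = np.full(n_states, 1.0 / n_states)
for _ in range(200):
    q = q_values(policy, mu)
    policy = kl_regularized_step(policy, q)
    mu = stationary_distribution(policy)  # population reacts to the new policy

print("final mean field:", np.round(mu, 3))
print("final policy:\n", np.round(policy, 3))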

Antonio Ocello, Daniil Tiapkin, Lorenzo Mancini, Mathieu Laurière, Eric Moulines

Computing Technology, Computer Technology

Antonio Ocello, Daniil Tiapkin, Lorenzo Mancini, Mathieu Laurière, Eric Moulines. Finite-Sample Convergence Bounds for Trust Region Policy Optimization in Mean-Field Games [EB/OL]. (2025-05-28) [2025-06-08]. https://arxiv.org/abs/2505.22781.
