
PPO in the Fisher-Rao geometry

Source: arXiv
English Abstract

Proximal Policy Optimization (PPO) has become a widely adopted algorithm for reinforcement learning, offering a practical policy gradient method with strong empirical performance. Despite its popularity, PPO lacks formal theoretical guarantees for policy improvement and convergence. PPO is motivated by Trust Region Policy Optimization (TRPO) that utilizes a surrogate loss with a KL divergence penalty, which arises from linearizing the value function within a flat geometric space. In this paper, we derive a tighter surrogate in the Fisher-Rao (FR) geometry, yielding a novel variant, Fisher-Rao PPO (FR-PPO). Our proposed scheme provides strong theoretical guarantees, including monotonic policy improvement. Furthermore, in the tabular setting, we demonstrate that FR-PPO achieves sub-linear convergence without any dependence on the dimensionality of the action or state spaces, marking a significant step toward establishing formal convergence results for PPO-based algorithms.
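For orientation, the following is a minimal sketch of the standard KL-penalized surrogate that motivates TRPO and PPO; it is not taken from the paper, and the advantage function $A^{\pi_{\mathrm{old}}}$, the discounted state-visitation distribution $d^{\pi_{\mathrm{old}}}$, and the penalty weight $\beta$ are assumed notation:

$$
L_{\pi_{\mathrm{old}}}(\pi) \;=\; \mathbb{E}_{s \sim d^{\pi_{\mathrm{old}}},\, a \sim \pi_{\mathrm{old}}(\cdot\mid s)}\!\left[\frac{\pi(a\mid s)}{\pi_{\mathrm{old}}(a\mid s)}\, A^{\pi_{\mathrm{old}}}(s,a)\right] \;-\; \beta\, \mathbb{E}_{s \sim d^{\pi_{\mathrm{old}}}}\!\left[\mathrm{KL}\big(\pi_{\mathrm{old}}(\cdot\mid s)\,\|\,\pi(\cdot\mid s)\big)\right].
$$

According to the abstract, FR-PPO obtains a tighter surrogate by performing the underlying linearization in the Fisher-Rao geometry rather than in a flat geometric space.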

Razvan-Andrei Lascu, David Šiška, Łukasz Szpruch

Subject: Computing technology, computer technology

Razvan-Andrei Lascu, David Šiška, Łukasz Szpruch. PPO in the Fisher-Rao geometry [EB/OL]. (2025-06-04) [2025-07-18]. https://arxiv.org/abs/2506.03757.
