
Reparameterization Proximal Policy Optimization

Source: arXiv
Abstract

Reparameterization policy gradient (RPG) is promising for improving sample efficiency by leveraging differentiable dynamics. However, a critical barrier is its training instability, where high-variance gradients can destabilize the learning process. To address this, we draw inspiration from Proximal Policy Optimization (PPO), which uses a surrogate objective to enable stable sample reuse in the model-free setting. We first examine the connection between this surrogate objective and RPG, a link that has been largely unexplored and is non-trivial to establish. We then bridge this gap by demonstrating that the reparameterization gradient of a PPO-like surrogate objective can be computed efficiently using backpropagation through time. Based on this key insight, we propose Reparameterization Proximal Policy Optimization (RPO), a stable and sample-efficient RPG-based method. RPO enables multiple epochs of stable sample reuse by optimizing a clipped surrogate objective tailored for RPG, while being further stabilized by Kullback-Leibler (KL) divergence regularization and remaining fully compatible with existing variance reduction methods. We evaluate RPO on a suite of challenging locomotion and manipulation tasks, where experiments demonstrate that our method achieves superior sample efficiency and strong performance.
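To make the described objective concrete, below is a minimal, hypothetical PyTorch sketch of a PPO-style clipped surrogate whose gradient is taken pathwise, flowing through reparameterized actions and a differentiable dynamics model via backpropagation through time, as the abstract outlines. The callables `policy_fn`, `old_policy_fn`, `dynamics_step`, and `reward_fn` are placeholders introduced here for illustration; the paper's exact surrogate, advantage handling, and variance-reduction terms may differ.

```python
# Minimal sketch (not the authors' implementation): a PPO-like clipped
# surrogate evaluated on a differentiable rollout, so that its gradient
# is the reparameterization (pathwise) gradient computed by BPTT.
import torch
from torch.distributions import Normal, kl_divergence


def rpo_surrogate_loss(policy_fn, old_policy_fn, dynamics_step, reward_fn,
                       s0, horizon, clip_eps=0.2, kl_coef=0.01, gamma=0.99):
    """Scalar loss; its gradient w.r.t. the current policy parameters is a
    reparameterization gradient of a clipped, KL-regularized surrogate."""
    s = s0
    surrogate = 0.0
    kl_terms = []
    for t in range(horizon):
        mu, std = policy_fn(s)                 # current policy outputs
        dist = Normal(mu, std)
        a = dist.rsample()                     # reparameterized action: pathwise gradient
        with torch.no_grad():                  # behaviour policy is held fixed
            mu_old, std_old = old_policy_fn(s)
        old_dist = Normal(mu_old, std_old)
        # Importance ratio between current and behaviour policy at the sampled action.
        ratio = torch.exp(dist.log_prob(a).sum(-1) - old_dist.log_prob(a).sum(-1))
        clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps)
        r = reward_fn(s, a)                    # differentiable reward (assumption)
        # PPO-style pessimistic (min) clipping; the reward stays differentiable,
        # so gradients reach it through the state and action (BPTT).
        surrogate = surrogate + (gamma ** t) * torch.min(ratio * r, clipped * r)
        kl_terms.append(kl_divergence(old_dist, dist).sum(-1))
        s = dynamics_step(s, a)                # differentiable dynamics step (assumption)
    kl_penalty = torch.stack(kl_terms).mean()
    return -(surrogate.mean() - kl_coef * kl_penalty)
```

In a PPO-like training loop, one would take several gradient steps on this loss per batch of rollouts and then refresh `old_policy_fn` with the updated parameters, mirroring the multiple epochs of sample reuse described above.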

Hai Zhong, Xun Wang, Zhuoran Li, Longbo Huang

Computing Technology, Computer Technology

Hai Zhong, Xun Wang, Zhuoran Li, Longbo Huang. Reparameterization Proximal Policy Optimization [EB/OL]. (2025-08-08) [2025-08-24]. https://arxiv.org/abs/2508.06214
