
AM-PPO: (Advantage) Alpha-Modulation with Proximal Policy Optimization

Source: arXiv

Abstract

Proximal Policy Optimization (PPO) is a widely used reinforcement learning algorithm that heavily relies on accurate advantage estimates for stable and efficient training. However, raw advantage signals can exhibit significant variance, noise, and scale-related issues, impeding optimal learning performance. To address this challenge, we introduce Advantage Modulation PPO (AM-PPO), a novel enhancement of PPO that adaptively modulates advantage estimates using a dynamic, non-linear scaling mechanism. This adaptive modulation employs an alpha controller that dynamically adjusts the scaling factor based on evolving statistical properties of the advantage signals, such as their norm, variance, and a predefined target saturation level. By incorporating a tanh-based gating function driven by these adaptively scaled advantages, AM-PPO reshapes the advantage signals to stabilize gradient updates and improve the conditioning of the policy gradient landscape. Crucially, this modulation also influences value function training by providing consistent and adaptively conditioned learning targets. Empirical evaluations across standard continuous control benchmarks demonstrate that AM-PPO achieves superior reward trajectories, exhibits sustained learning progression, and significantly reduces the clipping required by adaptive optimizers. These findings underscore the potential of advantage modulation as a broadly applicable technique for enhancing reinforcement learning optimization.
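
To make the mechanism concrete, below is a minimal sketch of the advantage-modulation idea described in the abstract: advantages are rescaled by an alpha factor, passed through a tanh gate, and alpha is adapted toward a target saturation level. The function name, the specific norm, and the alpha update rule are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def modulate_advantages(advantages, alpha, target_saturation=0.5,
                        adapt_rate=0.1, eps=1e-8):
    """Illustrative sketch of tanh-based advantage modulation (assumed form).

    `alpha` scales the normalized advantages before a tanh gate and is
    nudged toward a predefined target saturation of the gated signal.
    """
    advantages = np.asarray(advantages, dtype=np.float64)

    # Normalize by an RMS-style norm so the scale is comparable across batches.
    norm = np.linalg.norm(advantages) / np.sqrt(advantages.size) + eps
    scaled = alpha * advantages / norm

    # tanh gate reshapes the advantage signal into a bounded range.
    gated = np.tanh(scaled)

    # Measure gate saturation and adapt alpha toward the target level
    # (a simple proportional update, assumed here for illustration).
    saturation = np.mean(np.abs(gated))
    alpha = alpha * (1.0 + adapt_rate * (target_saturation - saturation))

    # Modulated advantages restored to the original scale of the batch.
    modulated = gated * norm
    return modulated, alpha
```

In a PPO training loop, the modulated advantages would stand in for the raw (e.g., GAE) estimates in the clipped surrogate objective, and, as the abstract notes, the same adaptively conditioned signal can also be used when forming value-function targets.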

Soham Sane

Subjects: Computing Technology, Computer Technology; Fundamental Theory of Automation

Soham Sane. AM-PPO: (Advantage) Alpha-Modulation with Proximal Policy Optimization [EB/OL]. (2025-05-21) [2025-06-05]. https://arxiv.org/abs/2505.15514.
