Evolutionary Policy Optimization
On-policy reinforcement learning (RL) algorithms are widely used for their strong asymptotic performance and training stability, but they struggle to scale with larger batch sizes, as additional parallel environments yield redundant data due to limited policy-induced diversity. In contrast, Evolutionary Algorithms (EAs) scale naturally and encourage exploration via randomized population-based search, but are often sample-inefficient. We propose Evolutionary Policy Optimization (EPO), a hybrid algorithm that combines the scalability and diversity of EAs with the performance and stability of policy gradients. EPO maintains a population of agents conditioned on latent variables, shares actor-critic network parameters for coherence and memory efficiency, and aggregates diverse experiences into a master agent. Across tasks in dexterous manipulation, legged locomotion, and classic control, EPO outperforms state-of-the-art baselines in sample efficiency, asymptotic performance, and scalability.
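The abstract describes the core architectural idea: a single set of actor-critic weights shared by a population of agents, each distinguished only by a latent variable, with the population's experience aggregated into one "master" update. The following is a minimal sketch of that idea, not the paper's implementation: the class and function names (LatentConditionedActorCritic, master_update) are hypothetical, discrete actions and a vanilla policy-gradient plus value loss are assumed in place of the paper's actual objective, and the evolutionary selection/mutation of the population is omitted.

import torch
import torch.nn as nn

class LatentConditionedActorCritic(nn.Module):
    """One actor-critic whose behavior is modulated by a per-agent latent,
    so a single set of weights serves the whole population (assumed design)."""
    def __init__(self, obs_dim, act_dim, num_agents, latent_dim=8):
        super().__init__()
        # One learnable latent vector per population member.
        self.latents = nn.Embedding(num_agents, latent_dim)
        self.actor = nn.Sequential(
            nn.Linear(obs_dim + latent_dim, 64), nn.Tanh(),
            nn.Linear(64, act_dim),
        )
        self.critic = nn.Sequential(
            nn.Linear(obs_dim + latent_dim, 64), nn.Tanh(),
            nn.Linear(64, 1),
        )

    def forward(self, obs, agent_ids):
        z = self.latents(agent_ids)            # (B, latent_dim) per-agent latent
        x = torch.cat([obs, z], dim=-1)
        logits = self.actor(x)                 # action logits (discrete actions assumed)
        value = self.critic(x).squeeze(-1)     # state-value estimate
        return logits, value

def master_update(model, optimizer, batch):
    """Aggregate trajectories gathered by all latent-conditioned agents into
    one gradient step on the shared weights (simplified surrogate objective)."""
    obs, agent_ids, actions, returns = batch
    logits, values = model(obs, agent_ids)
    dist = torch.distributions.Categorical(logits=logits)
    advantages = returns - values.detach()
    policy_loss = -(dist.log_prob(actions) * advantages).mean()
    value_loss = (returns - values).pow(2).mean()
    loss = policy_loss + 0.5 * value_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage on random data: a population of 4 agents sharing one network.
if __name__ == "__main__":
    obs_dim, act_dim, num_agents = 6, 3, 4
    model = LatentConditionedActorCritic(obs_dim, act_dim, num_agents)
    optimizer = torch.optim.Adam(model.parameters(), lr=3e-4)
    batch = (
        torch.randn(32, obs_dim),
        torch.randint(0, num_agents, (32,)),
        torch.randint(0, act_dim, (32,)),
        torch.randn(32),
    )
    print("loss:", master_update(model, optimizer, batch))

Because every population member indexes into the same shared network, memory grows only with the latent embedding table rather than with full per-agent copies of the actor-critic, which is the memory-efficiency argument the abstract makes.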
Jianren Wang, Yifan Su, Abhinav Gupta, Deepak Pathak
Fundamental theory of automation; computing and computer technology
Jianren Wang, Yifan Su, Abhinav Gupta, Deepak Pathak. Evolutionary Policy Optimization [EB/OL]. (2025-03-24) [2025-08-02]. https://arxiv.org/abs/2503.19037.