
Evolutionary Policy Optimization

Source: arXiv
Abstract

A key challenge in reinforcement learning (RL) is managing the exploration-exploitation trade-off without sacrificing sample efficiency. Policy gradient (PG) methods excel in exploitation through fine-grained, gradient-based optimization but often struggle with exploration due to their focus on local search. In contrast, evolutionary computation (EC) methods excel in global exploration, but lack mechanisms for exploitation. To address these limitations, this paper proposes Evolutionary Policy Optimization (EPO), a hybrid algorithm that integrates neuroevolution with policy gradient methods for policy optimization. EPO leverages the exploration capabilities of EC and the exploitation strengths of PG, offering an efficient solution to the exploration-exploitation dilemma in RL. EPO is evaluated on the Atari Pong and Breakout benchmarks. Experimental results show that EPO improves both policy quality and sample efficiency compared to standard PG and EC methods, making it effective for tasks that require both exploration and local optimization.
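To make the hybrid structure described above more concrete, the following is a minimal sketch of a neuroevolution-plus-policy-gradient loop on a toy multi-armed bandit. It is an illustrative assumption only: the environment, population size, truncation selection, Gaussian mutation, and REINFORCE-style refinement shown here are not the paper's actual EPO algorithm (which is evaluated on Atari Pong and Breakout), but they show how gradient-based exploitation and evolutionary exploration can be interleaved.

import numpy as np

rng = np.random.default_rng(0)

TRUE_MEANS = np.array([0.1, 0.5, 0.9])      # toy 3-armed bandit reward means
N_ACTIONS = len(TRUE_MEANS)
POP_SIZE = 8                                # policies in the population
PG_STEPS = 20                               # local policy-gradient refinement steps
GENERATIONS = 30
LR = 0.1
MUTATION_STD = 0.05

def softmax(logits):
    z = logits - logits.max()
    p = np.exp(z)
    return p / p.sum()

def rollout_return(logits, episodes=50):
    """Average reward of a softmax policy on the bandit (its 'fitness')."""
    probs = softmax(logits)
    actions = rng.choice(N_ACTIONS, size=episodes, p=probs)
    rewards = rng.normal(TRUE_MEANS[actions], 0.1)
    return rewards.mean()

def pg_refine(logits, steps=PG_STEPS):
    """Exploitation: REINFORCE-style gradient ascent on the softmax policy."""
    logits = logits.copy()
    for _ in range(steps):
        probs = softmax(logits)
        a = rng.choice(N_ACTIONS, p=probs)
        r = rng.normal(TRUE_MEANS[a], 0.1)
        grad = -probs
        grad[a] += 1.0                      # d log pi(a) / d logits
        logits += LR * r * grad             # REINFORCE update (no baseline)
    return logits

# Exploration: maintain a population of policy parameters.
population = [rng.normal(0.0, 1.0, N_ACTIONS) for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    # 1) Local, gradient-based exploitation of each individual.
    population = [pg_refine(ind) for ind in population]
    # 2) Evaluate fitness and keep the better half (truncation selection).
    fitness = np.array([rollout_return(ind) for ind in population])
    elite_idx = np.argsort(fitness)[-POP_SIZE // 2:]
    elites = [population[i] for i in elite_idx]
    # 3) Refill the population by mutating elites (global exploration).
    children = [e + rng.normal(0.0, MUTATION_STD, N_ACTIONS) for e in elites]
    population = elites + children

best = max(population, key=rollout_return)
print("best policy action probabilities:", softmax(best))

In this sketch the policy-gradient step plays the role of fine-grained local search within each generation, while selection and mutation over the population provide the global exploration that pure gradient methods lack.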

Zelal Su "Lain" Mustafaoglu, Keshav Pingali, Risto Miikkulainen

Subject: Computing Technology; Computer Technology

Zelal Su "Lain" Mustafaoglu, Keshav Pingali, Risto Miikkulainen. Evolutionary Policy Optimization [EB/OL]. (2025-04-16) [2025-08-02]. https://arxiv.org/abs/2504.12568.
