Deep Reinforcement Learning with Gradient Eligibility Traces
Achieving fast and stable off-policy learning in deep reinforcement learning (RL) is challenging. Most existing methods rely on semi-gradient temporal-difference (TD) methods for their simplicity and efficiency, but are consequently susceptible to divergence. While more principled approaches like Gradient TD (GTD) methods have strong convergence guarantees, they have rarely been used in deep RL. Recent work introduced the Generalized Projected Bellman Error ($\GPBE$), enabling GTD methods to work efficiently with nonlinear function approximation. However, that work is limited to one-step methods, which are slow at credit assignment and require a large number of samples. In this paper, we extend the $\GPBE$ objective to support multistep credit assignment based on the $\lambda$-return and derive three gradient-based methods that optimize this new objective. We provide both a forward-view formulation compatible with experience replay and a backward-view formulation compatible with streaming algorithms. Finally, we evaluate the proposed algorithms and show that they outperform PPO and StreamQ in MuJoCo and MinAtar environments, respectively. Code is available at https://github.com/esraaelelimy/gtd_algos
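For reference, a standard recursive form of the $\lambda$-return that the multistep extension builds on (written in generic notation, not necessarily the paper's) is
$$G_t^\lambda = R_{t+1} + \gamma \left[ (1-\lambda)\, \hat{v}(S_{t+1}) + \lambda\, G_{t+1}^\lambda \right],$$
where $\hat{v}$ is the learned value estimate, $\gamma$ is the discount factor, and $\lambda \in [0,1]$ interpolates between one-step TD targets ($\lambda = 0$) and Monte Carlo returns ($\lambda = 1$).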
Esraa Elelimy, Brett Daley, Andrew Patterson, Marlos C. Machado, Adam White, Martha White
Subjects: Computing Technology; Computer Technology
Esraa Elelimy, Brett Daley, Andrew Patterson, Marlos C. Machado, Adam White, Martha White. Deep Reinforcement Learning with Gradient Eligibility Traces [EB/OL]. (2025-07-12) [2025-07-25]. https://arxiv.org/abs/2507.09087.