
StaQ it! Growing neural networks for Policy Mirror Descent

Source: arXiv
Abstract

In Reinforcement Learning (RL), regularization has emerged as a popular tool both in theory and practice, typically based either on an entropy bonus or on a Kullback-Leibler divergence that constrains successive policies. In practice, these approaches have been shown to improve exploration, robustness and stability, giving rise to popular Deep RL algorithms such as SAC and TRPO. Policy Mirror Descent (PMD) is a theoretical framework that solves this general regularized policy optimization problem; however, its closed-form solution involves the sum of all past Q-functions, which is intractable in practice. We propose and analyze PMD-like algorithms that only keep the last $M$ Q-functions in memory, and show that for finite and large enough $M$, a convergent algorithm can be derived, introducing no error in the policy update, unlike prior deep RL PMD implementations. StaQ, the resulting algorithm, enjoys strong theoretical guarantees and is competitive with deep RL baselines, while exhibiting less performance oscillation, paving the way for fully stable deep RL algorithms and providing a testbed for experimentation with Policy Mirror Descent.
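To make the idea concrete, below is a minimal sketch, not the authors' implementation, of a PMD-style policy that keeps only the last M Q-functions in memory. It assumes the standard entropy-regularized softmax closed form, in which the policy is proportional to exp(eta * sum of the stored Q-functions); the class name QStackPolicy, the step size eta, and the tabular Q-tables (standing in for the paper's neural networks) are illustrative assumptions.

from collections import deque
import numpy as np

class QStackPolicy:
    def __init__(self, n_states, n_actions, M=5, eta=1.0):
        self.M = M                       # number of past Q-functions kept in memory
        self.eta = eta                   # step size / inverse temperature
        self.q_stack = deque(maxlen=M)   # oldest Q-function is dropped when full
        self.n_states, self.n_actions = n_states, n_actions

    def push(self, q_table):
        # After each policy evaluation step, append the newest Q-function estimate.
        self.q_stack.append(np.asarray(q_table))

    def probs(self, state):
        # Policy is a softmax of the truncated sum of the stored Q-functions.
        if not self.q_stack:
            return np.full(self.n_actions, 1.0 / self.n_actions)
        logits = self.eta * sum(q[state] for q in self.q_stack)
        logits -= logits.max()           # for numerical stability
        p = np.exp(logits)
        return p / p.sum()

# Usage: push a new Q estimate each iteration; only the last M are summed.
policy = QStackPolicy(n_states=4, n_actions=3, M=2)
policy.push(np.random.rand(4, 3))
policy.push(np.random.rand(4, 3))
print(policy.probs(0))

With M large enough, the abstract argues this truncation introduces no error in the policy update while keeping memory bounded, unlike keeping the full sum of all past Q-functions.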

Alena Shilova, Alex Davey, Brahim Driss, Riad Akrour

Subjects: Computing Technology, Computer Technology

Alena Shilova, Alex Davey, Brahim Driss, Riad Akrour. StaQ it! Growing neural networks for Policy Mirror Descent [EB/OL]. (2025-06-16) [2025-07-16]. https://arxiv.org/abs/2506.13862.
