State Entropy Regularization for Robust Reinforcement Learning

Source: arXiv
Abstract

State entropy regularization has been empirically shown to improve exploration and sample complexity in reinforcement learning (RL). However, its theoretical guarantees have not been studied. In this paper, we show that state entropy regularization improves robustness to structured and spatially correlated perturbations. These types of variation are common in transfer learning but often overlooked by standard robust RL methods, which typically focus on small, uncorrelated changes. We provide a comprehensive characterization of these robustness properties, including formal guarantees under reward and transition uncertainty, as well as settings where the method performs poorly. Much of our analysis contrasts state entropy with the widely used policy entropy regularization, highlighting their different benefits. Finally, from a practical standpoint, we illustrate that compared with policy entropy, the robustness advantages of state entropy are more sensitive to the number of rollouts used for policy evaluation.
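The abstract does not spell out an implementation, but state entropy regularization is commonly realized by adding an intrinsic bonus to the extrinsic reward, where the bonus is a non-parametric (k-nearest-neighbor) estimate of state-visitation entropy. The sketch below assumes that k-NN estimator; the names knn_state_entropy_bonus, regularized_rewards, and the values of tau and k are illustrative choices, not details from the paper.

    import numpy as np

    def knn_state_entropy_bonus(states, k=5):
        # Distance to the k-th nearest neighbor among visited states:
        # rarely visited regions of the state space get larger bonuses,
        # a standard non-parametric proxy for state-visitation entropy.
        dists = np.linalg.norm(states[:, None, :] - states[None, :, :], axis=-1)
        knn_dist = np.sort(dists, axis=1)[:, k]  # index 0 is the self-distance
        return np.log(knn_dist + 1e-8)

    def regularized_rewards(rewards, states, tau=0.1, k=5):
        # Shaped reward: extrinsic reward plus tau times the entropy bonus.
        return rewards + tau * knn_state_entropy_bonus(states, k=k)

    # Toy usage: a batch of visited 2-D states and their extrinsic rewards.
    states = np.random.randn(128, 2)
    rewards = np.zeros(128)
    shaped = regularized_rewards(rewards, states, tau=0.1, k=5)

In this formulation, the coefficient tau controls the trade-off between the task reward and coverage of the state space; policy entropy regularization, by contrast, adds a bonus on the entropy of the action distribution rather than on visited states.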

Uri Koren, Yonatan Ashlag, Mirco Mutti, Esther Derman, Pierre-Luc Bacon, Shie Mannor

Computing technology; computer technology

Uri Koren, Yonatan Ashlag, Mirco Mutti, Esther Derman, Pierre-Luc Bacon, Shie Mannor. State Entropy Regularization for Robust Reinforcement Learning [EB/OL]. (2025-06-08) [2025-07-01]. https://arxiv.org/abs/2506.07085.
