CueLearner: Bootstrapping and local policy adaptation from relative feedback
Human guidance has emerged as a powerful tool for enhancing reinforcement learning (RL). However, conventional forms of guidance such as demonstrations or binary scalar feedback can be challenging to collect or have low information content, motivating the exploration of other forms of human input. Among these, relative feedback (i.e., feedback on how to improve an action, such as "more to the left") offers a good balance between usability and information richness. Previous research has shown that relative feedback can be used to enhance policy search methods. However, these efforts have been limited to specific policy classes and used feedback inefficiently. In this work, we introduce a novel method to learn from relative feedback and combine it with off-policy reinforcement learning. Through evaluations on two sparse-reward tasks, we demonstrate that our method can improve the sample efficiency of reinforcement learning by guiding its exploration process. Additionally, we show it can adapt a policy to changes in the environment or the user's preferences. Finally, we demonstrate real-world applicability by employing our approach to learn a navigation policy in a sparse reward setting.
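To make the notion of relative feedback concrete, the sketch below shows one plausible way such a signal could be interpreted: as a small directional correction applied to a proposed action. This is an illustrative assumption only, not the method described in the paper; the function name, `step_size`, and action bounds are hypothetical.

```python
import numpy as np

def apply_relative_feedback(action, direction, step_size=0.1, low=-1.0, high=1.0):
    """Illustrative only: shift an action along the direction indicated by the
    human (e.g. "more to the left"), clipped to the valid action range.
    The step size and clipping bounds are assumptions for this sketch."""
    corrected = np.asarray(action) + step_size * np.asarray(direction)
    return np.clip(corrected, low, high)

if __name__ == "__main__":
    action = np.array([0.4, 0.0])        # e.g. steering and throttle
    feedback = np.array([-1.0, 0.0])     # "more to the left" encoded as a unit direction
    print(apply_relative_feedback(action, feedback))  # -> [0.3 0. ]
```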
Giulio Schiavi, Andrei Cramariuc, Lionel Ott, Roland Siegwart
Computing technology, computer technology
Giulio Schiavi, Andrei Cramariuc, Lionel Ott, Roland Siegwart. CueLearner: Bootstrapping and local policy adaptation from relative feedback [EB/OL]. (2025-07-07) [2025-07-23]. https://arxiv.org/abs/2507.04730