REBEL: Reward Regularization-Based Approach for Robotic Reinforcement Learning from Human Feedback
The effectiveness of reinforcement learning (RL) agents in continuous control robotics tasks depends largely on the design of the underlying reward function, which is highly prone to reward hacking. A misalignment between the reward function and underlying human preferences (values, social norms) can lead to catastrophic outcomes in the real world, especially in the context of robotics for critical decision making. Recent methods aim to mitigate this misalignment by learning reward functions from human preferences and subsequently performing policy optimization. However, these methods inadvertently introduce a distribution shift during reward learning by ignoring the dependence of agent-generated trajectories on the reward learning objective, ultimately resulting in sub-optimal alignment. In this work, we address this challenge by advocating the adoption of regularized reward functions that more accurately mirror the intended behaviors of the agent. We propose a novel form of reward regularization within the robotic RLHF (RL from Human Feedback) framework, which we refer to as "agent preferences". Our approach incorporates not only human feedback in the form of preferences but also the preferences of the RL agent itself during reward function learning. This dual consideration significantly mitigates the issue of distribution shift in RLHF with a computationally tractable algorithm. We provide a theoretical justification for the proposed algorithm by formulating the robotic RLHF problem as a bilevel optimization problem and deriving a computationally tractable approximation of it. We demonstrate the efficiency of our algorithm REBEL on several continuous control benchmarks from the DeepMind Control Suite (Tassa et al., 2018).
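Since the abstract only gestures at the mechanism, the following is a minimal sketch of what agent-preference-regularized reward learning could look like, assuming a Bradley-Terry preference model over trajectory segments. The names (RewardNet, rebel_style_loss, lambda_reg) and the use of agent-induced preference pairs as the regularizer are illustrative assumptions, not the paper's actual objective or notation.

```python
# Hedged sketch: human-preference reward learning plus an agent-preference
# regularizer. This is NOT the paper's exact objective; the Bradley-Terry
# model and all names here are illustrative assumptions.
import torch
import torch.nn as nn


class RewardNet(nn.Module):
    """Maps a (state, action) pair to a scalar reward."""

    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1)).squeeze(-1)


def segment_return(reward_net, segment):
    """Sum of predicted rewards over a trajectory segment (obs, act tensors)."""
    obs, act = segment
    return reward_net(obs, act).sum()


def preference_loss(reward_net, pairs):
    """Bradley-Terry negative log-likelihood over (preferred, rejected) pairs."""
    losses = []
    for preferred, rejected in pairs:
        r_pref = segment_return(reward_net, preferred)
        r_rej = segment_return(reward_net, rejected)
        # P(preferred beats rejected) = sigmoid(r_pref - r_rej)
        losses.append(-torch.nn.functional.logsigmoid(r_pref - r_rej))
    return torch.stack(losses).mean()


def rebel_style_loss(reward_net, human_pairs, agent_pairs, lambda_reg=0.5):
    """Human preference loss plus an agent-preference regularizer.

    `agent_pairs` stands in for preferences induced by the current RL
    agent's own behavior (e.g., ranking its trajectories by achieved
    return), keeping the learned reward consistent with the trajectory
    distribution the agent actually visits.
    """
    return (preference_loss(reward_net, human_pairs)
            + lambda_reg * preference_loss(reward_net, agent_pairs))
```

In a full training loop, agent_pairs would presumably be refreshed as the policy improves, which is how a regularizer of this shape could couple reward learning to the agent's current trajectory distribution and counteract the distribution shift described above.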
Anukriti Singh, Dinesh Manocha, Souradip Chakraborty, Amisha Bhaskar, Pratap Tokekar, Amrit Singh Bedi
Subject areas: computing technology, computer technology; automation technology, automation equipment
Anukriti Singh, Dinesh Manocha, Souradip Chakraborty, Amisha Bhaskar, Pratap Tokekar, Amrit Singh Bedi. REBEL: Reward Regularization-Based Approach for Robotic Reinforcement Learning from Human Feedback [EB/OL]. (2023-12-21) [2025-08-06]. https://arxiv.org/abs/2312.14436.