Learning a Pessimistic Reward Model in RLHF
This work proposes PET, a novel pessimistic reward fine-tuning method for learning a pessimistic reward model that is robust against reward hacking in offline reinforcement learning from human feedback (RLHF). Traditional reward modeling techniques in RLHF train an imperfect reward model, and KL regularization plays a pivotal role in mitigating reward hacking when a policy is optimized against it. This intuition-based remedy still suffers from reward hacking, and it excludes policies with large KL divergence from the dataset distribution during learning. In contrast, we show that when a policy is optimized on a pessimistic reward model fine-tuned through PET, reward hacking can be prevented without relying on any regularization. We test our method on the standard TL;DR summarization dataset and find that one can learn a high-quality policy on our pessimistic reward without using any regularization. Such a policy has a high KL divergence from the dataset distribution yet performs well in practice. In summary, our work shows the feasibility of learning a pessimistic reward model that resists reward hacking: the agent can greedily search for a policy with high pessimistic reward without suffering from reward hacking.
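The contrast described in the abstract can be illustrated with a minimal sketch: the standard RLHF objective subtracts a KL penalty toward a reference (dataset) policy from the learned reward, whereas the pessimistic approach optimizes the pessimistic reward greedily with no regularization. The tensor values and variable names below are purely illustrative assumptions; PET's actual fine-tuning procedure is not specified in the abstract.

```python
import torch

# Hypothetical per-sample quantities for a small batch of sampled responses.
reward_hat = torch.tensor([1.2, 0.7, 2.3])      # scores from an imperfect learned reward model
reward_pess = torch.tensor([0.9, 0.6, 1.1])     # scores from a pessimistic reward model (PET-style, illustrative)
logp_policy = torch.tensor([-3.1, -2.4, -1.0])  # log-prob of each response under the current policy
logp_ref = torch.tensor([-3.0, -2.8, -2.5])     # log-prob under the reference (dataset) policy

# Standard KL-regularized RLHF objective: reward minus a penalty for drifting from the reference policy.
beta = 0.1                                      # KL coefficient; keeps the policy near the dataset distribution
kl_term = logp_policy - logp_ref                # per-sample estimate of log(pi / pi_ref)
objective_kl = (reward_hat - beta * kl_term).mean()

# Pessimistic alternative described in the abstract: greedily maximize the
# pessimistic reward with no KL regularization at all.
objective_pess = reward_pess.mean()

print(f"KL-regularized objective:     {objective_kl.item():.3f}")
print(f"Greedy pessimistic objective: {objective_pess.item():.3f}")
```

In the standard setup, the `beta` term is what excludes policies far from the dataset distribution; the abstract's claim is that a pessimistic reward makes this penalty unnecessary.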
Yinglun Xu, Hangoo Kang, Tarun Suresh, Yuxuan Wan, Gagandeep Singh
Computing Technology; Computer Technology
Yinglun Xu, Hangoo Kang, Tarun Suresh, Yuxuan Wan, Gagandeep Singh. Learning a Pessimistic Reward Model in RLHF [EB/OL]. (2025-05-26) [2025-06-27]. https://arxiv.org/abs/2505.20556.