
The Impact of Preference Agreement in Reinforcement Learning from Human Feedback: A Case Study in Summarization

Source: arXiv
Abstract

Reinforcement Learning from Human Feedback (RLHF) can be used to capture complex and nuanced properties of text generation quality. As a result, the task of text summarization has been identified as a good candidate for this process. In this paper, we explore how preference agreement impacts the efficacy of RLHF for summarization. We show that sampling human preferences to include a range of annotator agreement (1) yields higher-accuracy reward models and (2) alters the characteristics of quality captured. We additionally show improvements in downstream generation when using a reward model trained with a range of preference agreements. Our contributions have implications for the design of synthetic datasets as well as the importance of considering quality differentials in comparison-based data.
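To make the setup concrete, below is a minimal sketch of the kind of pipeline the abstract describes: a pairwise (Bradley-Terry style) reward-model loss over preferred/rejected summaries, plus a helper that samples preference pairs by their annotator-agreement level. This is not the authors' implementation; the `agreement` field, function names, and thresholds are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

# Sketch only: Bradley-Terry style pairwise loss for a summarization reward model.
# The reward model assigns a scalar score to each summary; the loss pushes the
# score of the preferred summary above that of the rejected one.
def reward_model_loss(reward_chosen: torch.Tensor, reward_rejected: torch.Tensor) -> torch.Tensor:
    """Pairwise preference loss: -log sigmoid(r_chosen - r_rejected)."""
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()

def sample_by_agreement(pairs, low=0.5, high=1.0):
    """Keep preference pairs whose annotator agreement falls in [low, high].

    Each pair is assumed to carry an 'agreement' value in [0, 1], e.g. the
    fraction of annotators who preferred the chosen summary. The field name
    and interval are illustrative, not taken from the paper.
    """
    return [p for p in pairs if low <= p["agreement"] <= high]

if __name__ == "__main__":
    # Dummy reward scores for a batch of 8 preference pairs.
    chosen = torch.randn(8)    # scores for preferred summaries
    rejected = torch.randn(8)  # scores for rejected summaries
    print(reward_model_loss(chosen, rejected))
```

Training separate reward models on pairs filtered to different agreement ranges (e.g. near-unanimous vs. mixed agreement) is one way to study how agreement affects reward-model accuracy and the quality characteristics it captures.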

Sian Gooding, Hassan Mansoor

Subjects: Computing Technology, Computer Technology

Sian Gooding, Hassan Mansoor. The Impact of Preference Agreement in Reinforcement Learning from Human Feedback: A Case Study in Summarization [EB/OL]. (2023-11-02) [2025-07-23]. https://arxiv.org/abs/2311.04919.
