Reflective Verbal Reward Design for Pluralistic Alignment

Source: arXiv

Abstract

AI agents are commonly aligned with "human values" through reinforcement learning from human feedback (RLHF), where a single reward model is learned from aggregated human feedback and used to align an agent's behavior. However, human values are not homogeneous--different people hold distinct and sometimes conflicting values. Aggregating feedback into a single reward model risks disproportionately suppressing minority preferences. To address this, we present a novel reward modeling approach for learning individualized reward models. Our approach uses a language model to guide users through reflective dialogues where they critique agent behavior and construct their preferences. This personalized dialogue history, containing the user's reflections and critiqued examples, is then used as context for another language model that serves as an individualized reward function (what we call a "verbal reward model") for evaluating new trajectories. In studies with 30 participants, our method achieved a 9-12% improvement in accuracy over non-reflective verbal reward models while being more sample efficient than traditional supervised learning methods.
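To make the idea of a "verbal reward model" concrete, the following is a minimal sketch (not the authors' implementation): the user's reflective dialogue history is placed in the prompt as context, and a language model is asked to score a new trajectory. The function and parameter names, the prompt wording, and the `query_llm` helper (standing in for any chat-completion API) are all illustrative assumptions.

```python
from typing import Callable, List

def verbal_reward(
    dialogue_history: List[str],      # user's reflections and critiqued examples
    trajectory: str,                  # textual description of the agent's new behavior
    query_llm: Callable[[str], str],  # hypothetical LLM call: prompt -> completion text
) -> float:
    """Return a scalar reward for `trajectory`, conditioned on one user's dialogue."""
    prompt = (
        "You are a personalized reward model for one specific user.\n"
        "Below is the user's reflective dialogue: their stated preferences and "
        "critiques of past agent behavior.\n\n"
        + "\n".join(dialogue_history)
        + "\n\nNew agent trajectory:\n"
        + trajectory
        + "\n\nOn a scale from 0 (strongly violates the user's preferences) to 1 "
        "(strongly matches them), rate this trajectory. Reply with a number only."
    )
    reply = query_llm(prompt)
    try:
        # Clamp to [0, 1] in case the model replies slightly out of range.
        return max(0.0, min(1.0, float(reply.strip())))
    except ValueError:
        return 0.5  # fall back to a neutral score if the reply is not numeric
```

In this sketch the dialogue history does the personalization: two users with different reflections yield different reward functions from the same underlying language model, which is the core mechanism the abstract describes.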

Carter Blair, Kate Larson, Edith Law

Subject: Computing technology; computer technology

Carter Blair, Kate Larson, Edith Law. Reflective Verbal Reward Design for Pluralistic Alignment [EB/OL]. (2025-06-21) [2025-07-16]. https://arxiv.org/abs/2506.17834.