Two Minds Better Than One: Collaborative Reward Modeling for LLM Alignment
Reward models (RMs) play a pivotal role in aligning large language models (LLMs) with human values. However, noisy preferences in human feedback can lead to reward misgeneralization, a phenomenon in which reward models learn spurious correlations or overfit to noisy preferences, posing significant challenges to RM generalization. This paper systematically analyzes the characteristics of preference pairs and aims to identify how noisy preferences differ from human-aligned preferences in reward modeling. Our analysis reveals that noisy preferences are difficult for RMs to fit, as they cause sharp training fluctuations and irregular gradient updates. These distinctive dynamics suggest the feasibility of identifying and excluding such noisy preferences. Empirical studies demonstrate that a policy LLM optimized with a reward model trained on the full preference dataset, which includes substantial noise, performs worse than one trained on a subset of exclusively high-quality preferences. To address this challenge, we propose an online Collaborative Reward Modeling (CRM) framework that achieves robust preference learning through peer review and curriculum learning. In particular, CRM maintains two RMs that collaboratively filter potentially noisy preferences by peer-reviewing each other's data selections. Curriculum learning synchronizes the capabilities of the two models, mitigating excessive disparities to promote the utility of peer review. Extensive experiments demonstrate that CRM significantly enhances RM generalization, with up to a 9.94-point improvement on RewardBench under an extreme 40% noise ratio. Moreover, CRM can be seamlessly extended to implicit-reward alignment methods, offering a robust and versatile alignment strategy.
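The abstract only sketches the peer-review and curriculum mechanisms at a high level; the exact CRM algorithm is given in the paper itself. The snippet below is a minimal, hypothetical illustration of the core idea under assumed details: each reward model ranks preference pairs by its own Bradley-Terry loss, nominates the low-loss (likely clean) pairs, and its peer is trained only on that selection, while an assumed schedule (`keep_ratio_schedule`) anneals the kept fraction toward the estimated clean proportion. Names such as `TinyRM`, `pair_loss`, and `peer_review_losses` are placeholders, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyRM(nn.Module):
    """Toy stand-in for a reward model: maps a feature vector to a scalar reward."""
    def __init__(self, dim: int = 16):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.score(x).squeeze(-1)

def pair_loss(rm: nn.Module, chosen: torch.Tensor, rejected: torch.Tensor) -> torch.Tensor:
    """Per-pair Bradley-Terry loss: -log sigmoid(r(chosen) - r(rejected))."""
    return -F.logsigmoid(rm(chosen) - rm(rejected))

def peer_review_losses(rm_a, rm_b, chosen, rejected, keep_ratio: float):
    """One collaborative step (sketch): each RM nominates the pairs it fits best
    (lowest loss), and its peer is trained only on that selection, so neither
    model reinforces its own noisy picks."""
    k = max(1, int(keep_ratio * chosen.size(0)))
    with torch.no_grad():
        la = pair_loss(rm_a, chosen, rejected)
        lb = pair_loss(rm_b, chosen, rejected)
    idx_a = torch.topk(-la, k).indices  # pairs model A trusts
    idx_b = torch.topk(-lb, k).indices  # pairs model B trusts
    loss_b = pair_loss(rm_b, chosen[idx_a], rejected[idx_a]).mean()  # B learns from A's picks
    loss_a = pair_loss(rm_a, chosen[idx_b], rejected[idx_b]).mean()  # A learns from B's picks
    return loss_a, loss_b

def keep_ratio_schedule(step: int, total: int, est_noise: float = 0.4) -> float:
    """Assumed curriculum schedule: keep all pairs early, then anneal toward
    roughly the estimated clean fraction of the data."""
    return 1.0 - est_noise * min(1.0, step / max(1, total // 5))

if __name__ == "__main__":
    # Usage sketch on random features standing in for encoded preference pairs.
    rm_a, rm_b = TinyRM(), TinyRM()
    opt = torch.optim.Adam(list(rm_a.parameters()) + list(rm_b.parameters()), lr=1e-3)
    chosen, rejected = torch.randn(64, 16), torch.randn(64, 16)
    for step in range(100):
        ratio = keep_ratio_schedule(step, total=100)
        loss_a, loss_b = peer_review_losses(rm_a, rm_b, chosen, rejected, ratio)
        opt.zero_grad()
        (loss_a + loss_b).backward()
        opt.step()
```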
Mingxu Chai, Zizhuo Zhang, Jiazheng Zhang, Wenqing Jing, Zhiheng Xi, Shihan Dou, Rongxiang Weng, Jiahuan Li, Jingang Wang, Shibo Hong, Tao Gui, Qi Zhang
Computing technology; computer technology
Mingxu Chai, Zizhuo Zhang, Jiazheng Zhang, Wenqing Jing, Zhiheng Xi, Shihan Dou, Rongxiang Weng, Jiahuan Li, Jingang Wang, Shibo Hong, Tao Gui, Qi Zhang. Two Minds Better Than One: Collaborative Reward Modeling for LLM Alignment [EB/OL]. (2025-05-15) [2025-07-02]. https://arxiv.org/abs/2505.10597