Towards Reward Fairness in RLHF: From a Resource Allocation Perspective
Rewards serve as proxies for human preferences and play a crucial role in Reinforcement Learning from Human Feedback (RLHF). However, when these rewards are inherently imperfect and exhibit various biases, they can adversely affect the alignment of large language models (LLMs). In this paper, we collectively define the various biases present in rewards as the problem of reward unfairness. We propose a bias-agnostic method that addresses reward fairness from a resource allocation perspective: rather than designing a separate fix for each type of bias, it mitigates them collectively. Specifically, we model preference learning as a resource allocation problem, treating rewards as resources to be allocated while considering the trade-off between utility and fairness in their distribution. We propose two methods, Fairness Regularization and Fairness Coefficient, to achieve fairness in rewards. We apply our methods in both verification and reinforcement learning scenarios to obtain a fair reward model and a policy model, respectively. Experiments conducted in these scenarios demonstrate that our approach aligns LLMs with human preferences in a fairer manner.
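The abstract does not spell out the training objective, so the sketch below is only a rough illustration of the utility-fairness trade-off it describes: a standard Bradley-Terry reward-model loss augmented with a hypothetical fairness penalty. The function name `fairness_regularized_loss`, the choice of reward variance as the fairness measure, and the coefficient `lam` are assumptions made for illustration, not the paper's actual Fairness Regularization or Fairness Coefficient.

```python
# Minimal sketch, assuming a pairwise Bradley-Terry reward objective plus a
# hypothetical variance-based fairness penalty (not the paper's exact method).
import torch
import torch.nn.functional as F

def fairness_regularized_loss(r_chosen: torch.Tensor,
                              r_rejected: torch.Tensor,
                              lam: float = 0.1) -> torch.Tensor:
    """Pairwise preference loss plus an illustrative fairness regularizer.

    r_chosen / r_rejected: reward scores for preferred / dispreferred responses.
    lam: trade-off coefficient between utility (preference accuracy) and fairness.
    """
    # Utility term: standard Bradley-Terry pairwise preference loss.
    utility_loss = -F.logsigmoid(r_chosen - r_rejected).mean()
    # Fairness term (assumed here): penalize dispersion of the allocated rewards
    # so that no subset of examples receives disproportionately extreme scores.
    rewards = torch.cat([r_chosen, r_rejected])
    fairness_penalty = rewards.var()
    return utility_loss + lam * fairness_penalty

# Example usage on a batch of 8 preference pairs with random scores.
scores_chosen = torch.randn(8, requires_grad=True)
scores_rejected = torch.randn(8, requires_grad=True)
loss = fairness_regularized_loss(scores_chosen, scores_rejected)
loss.backward()
```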
Sheng Ouyang, Yulan Hu, Ge Chen, Qingyang Li, Fuzheng Zhang, Yong Liu
Computing Technology; Computer Technology
Sheng Ouyang, Yulan Hu, Ge Chen, Qingyang Li, Fuzheng Zhang, Yong Liu. Towards Reward Fairness in RLHF: From a Resource Allocation Perspective [EB/OL]. (2025-05-29) [2025-06-16]. https://arxiv.org/abs/2505.23349.