Direct Advantage Regression: Aligning LLMs with Online AI Reward
Online AI Feedback (OAIF) presents a promising alternative to Reinforcement Learning from Human Feedback (RLHF) by using online AI preferences to align large language models (LLMs). However, straightforwardly replacing humans with AI deprives LLMs of finer-grained AI supervision beyond binary signals. In this paper, we propose Direct Advantage Regression (DAR), a simple alignment algorithm that uses online AI reward to optimize policy improvement through weighted supervised fine-tuning. As an RL-free approach, DAR maintains theoretical consistency with online RLHF pipelines while significantly reducing implementation complexity and improving learning efficiency. Our empirical results underscore that AI reward is a better form of AI supervision than AI preference, consistently achieving higher human-AI agreement. Additionally, evaluations using GPT-4-Turbo and MT-bench show that DAR outperforms both OAIF and online RLHF baselines.
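The abstract describes DAR as weighted supervised fine-tuning driven by online AI reward but does not give the exact objective. Below is a minimal sketch of an advantage-weighted SFT loss, assuming the weight is an exponentiated, baseline-subtracted AI reward; the temperature `beta`, the clamp value, and all names are illustrative assumptions, not the paper's specification.

```python
import torch
import torch.nn.functional as F


def weighted_sft_loss(logits, labels, seq_weights, pad_token_id):
    """Per-sequence advantage-weighted negative log-likelihood (sketch).

    logits: (B, T, V) from the policy; labels: (B, T) target token ids;
    seq_weights: (B,) nonnegative weights derived from AI-reward advantages.
    """
    # Shift so that tokens < t predict token t (standard causal LM loss).
    logits = logits[:, :-1, :]
    targets = labels[:, 1:]
    mask = (targets != pad_token_id).float()

    # Token-level negative log-likelihood of the sampled responses.
    log_probs = F.log_softmax(logits, dim=-1)
    token_nll = -log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)

    # Mean NLL per sequence, ignoring padding.
    seq_nll = (token_nll * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1)

    # Weight each sequence by its advantage-derived weight and average.
    return (seq_weights * seq_nll).mean()


# Assumed weight construction from online AI rewards for responses sampled
# from the current policy for one prompt (per-prompt mean as baseline):
rewards = torch.tensor([0.2, 1.1, -0.5, 0.7])          # AI reward per response
beta = 1.0                                              # assumed temperature
weights = torch.exp((rewards - rewards.mean()) / beta).clamp(max=20.0)
```

In this reading, responses with above-average AI reward are up-weighted and below-average ones down-weighted during supervised fine-tuning, which keeps the update RL-free while still exploiting scalar (non-binary) AI supervision.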
Li He, He Zhao, Stephen Wan, Dadong Wang, Lina Yao, Tongliang Liu
Computing Technology, Computer Technology
Li He, He Zhao, Stephen Wan, Dadong Wang, Lina Yao, Tongliang Liu. Direct Advantage Regression: Aligning LLMs with Online AI Reward [EB/OL]. (2025-04-19) [2025-06-25]. https://arxiv.org/abs/2504.14177.