Alignment as Distribution Learning: Your Preference Model is Explicitly a Language Model

Source: arXiv

Abstract

Alignment via reinforcement learning from human feedback (RLHF) has become the dominant paradigm for controlling the quality of outputs from large language models (LLMs). However, when viewed as `loss + regularization,' the standard RLHF objective lacks theoretical justification and incentivizes degenerate, deterministic solutions, an issue that variants such as Direct Preference Optimization (DPO) also inherit. In this paper, we rethink alignment by framing it as \emph{distribution learning} from pairwise preference feedback, explicitly modeling how information about the target language model bleeds through the preference data. This explicit modeling leads us to propose three principled learning objectives: preference maximum likelihood estimation, preference distillation, and reverse KL minimization. We theoretically show that all three approaches enjoy strong non-asymptotic $O(1/n)$ convergence to the target language model, naturally avoiding degeneracy and reward overfitting. Finally, we empirically demonstrate that our distribution learning framework, especially preference distillation, consistently outperforms or matches the performance of RLHF and DPO across various tasks and models.
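For context on the `loss + regularization' view mentioned above, the sketch below shows the standard KL-regularized RLHF objective together with the Bradley-Terry preference model usually used to fit the reward. This is the conventional formulation from the RLHF literature, given only as background; the paper's own three objectives (preference MLE, preference distillation, reverse KL minimization) are defined in the full text.

\[
\max_{\theta}\; \mathbb{E}_{x \sim \mathcal{D}}\Bigl[\, \mathbb{E}_{y \sim \pi_\theta(\cdot \mid x)}\bigl[r(x, y)\bigr] \;-\; \beta\,\mathrm{KL}\bigl(\pi_\theta(\cdot \mid x)\,\big\|\,\pi_{\mathrm{ref}}(\cdot \mid x)\bigr) \Bigr],
\qquad
P(y_w \succ y_l \mid x) = \sigma\bigl(r(x, y_w) - r(x, y_l)\bigr),
\]

where $\pi_\theta$ is the language model being aligned, $\pi_{\mathrm{ref}}$ is a fixed reference model, $r$ is a reward model fit to pairwise preferences ($y_w$ preferred over $y_l$ for prompt $x$), $\beta > 0$ is the regularization strength, and $\sigma$ is the logistic function. The abstract's distribution-learning framing instead treats the pairwise preference data as carrying information about a target language model, which the proposed objectives aim to recover directly.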

Jihun Yun, Juno Kim, Jongho Park, Junhyuck Kim, Jongha Jon Ryu, Jaewoong Cho, Kwang-Sung Jun

Subjects: Computing Technology; Computer Technology

Jihun Yun, Juno Kim, Jongho Park, Junhyuck Kim, Jongha Jon Ryu, Jaewoong Cho, Kwang-Sung Jun. Alignment as Distribution Learning: Your Preference Model is Explicitly a Language Model [EB/OL]. (2025-06-02) [2025-07-09]. https://arxiv.org/abs/2506.01523.
