
KL-Regularised Q-Learning: A Token-level Action-Value perspective on Online RLHF

Source: arXiv
Abstract

Proximal Policy Optimisation (PPO) is an established and effective policy gradient algorithm used for Language Model Reinforcement Learning from Human Feedback (LM-RLHF). PPO performs well empirically, but it has a heuristic motivation and handles the KL-divergence constraint used in LM-RLHF in an ad-hoc manner. In this paper, we develop a new action-value RL method for the LM-RLHF setting, KL-regularised Q-Learning (KLQ). We then show that, despite its very different motivation, our method is equivalent to a version of PPO in a specific sense. Finally, we benchmark KLQ on two key language generation tasks -- summarisation and single-turn dialogue. We demonstrate that KLQ performs on par with PPO at optimising the LM-RLHF objective, and achieves a consistently higher win-rate against PPO on LLM-as-a-judge evaluations.
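For context, the LM-RLHF objective referred to above is the KL-regularised expected reward; the notation below (reward model r, regularisation coefficient β, reference policy π_ref) is standard and assumed here rather than taken from the abstract:

    J(\pi) = \mathbb{E}_{x \sim \mathcal{D},\, y \sim \pi(\cdot \mid x)}\!\left[ r(x, y) \right] - \beta\, \mathrm{KL}\!\left( \pi(\cdot \mid x) \,\|\, \pi_{\mathrm{ref}}(\cdot \mid x) \right).

A token-level action-value view follows from the standard identity for KL-regularised MDPs, in which the optimal policy is a Boltzmann reweighting of the reference policy by the action values (again a general result, not the paper's specific construction):

    \pi^{*}(a_t \mid s_t) \propto \pi_{\mathrm{ref}}(a_t \mid s_t)\, \exp\!\left( Q^{*}(s_t, a_t) / \beta \right).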

Jason R Brown, Lennie Wells, Edward James Young, Sergio Bacallado

Subject areas: Computing Technology, Computer Technology

Jason R Brown, Lennie Wells, Edward James Young, Sergio Bacallado. KL-Regularised Q-Learning: A Token-level Action-Value perspective on Online RLHF [EB/OL]. (2025-08-23) [2025-09-05]. https://arxiv.org/abs/2508.17000
