
Accelerating Nash Learning from Human Feedback via Mirror Prox

Source: arXiv
Abstract

Traditional Reinforcement Learning from Human Feedback (RLHF) often relies on reward models, frequently assuming preference structures like the Bradley-Terry model, which may not accurately capture the complexities of real human preferences (e.g., intransitivity). Nash Learning from Human Feedback (NLHF) offers a more direct alternative by framing the problem as finding a Nash equilibrium of a game defined by these preferences. In this work, we introduce Nash Mirror Prox ($\mathtt{Nash-MP}$), an online NLHF algorithm that leverages the Mirror Prox optimization scheme to achieve fast and stable convergence to the Nash equilibrium. Our theoretical analysis establishes that Nash-MP exhibits last-iterate linear convergence towards the $\beta$-regularized Nash equilibrium. Specifically, we prove that the KL-divergence to the optimal policy decreases at a rate of order $(1+2\beta)^{-N/2}$, where $N$ is the number of preference queries. We further demonstrate last-iterate linear convergence for the exploitability gap and, uniformly, for the span semi-norm of the log-probabilities, with all these rates being independent of the size of the action space. Furthermore, we propose and analyze an approximate version of Nash-MP in which the proximal steps are estimated using stochastic policy gradients, bringing the algorithm closer to practical applications. Finally, we detail a practical implementation strategy for fine-tuning large language models and present experiments that demonstrate its competitive performance and compatibility with existing methods.
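To make the Mirror Prox idea concrete, the following is a minimal tabular sketch of an extragradient (Mirror Prox) update with an entropic mirror map on a KL-regularized preference game. It is an illustrative toy under assumed names and values (the preference matrix P, step size eta, reference policy mu, and regularization beta are all hypothetical), not the paper's Nash-MP implementation or its LLM fine-tuning procedure.

```python
# Illustrative sketch only: Mirror Prox (extragradient) self-play on a small
# preference game with KL regularization toward a reference policy mu.
# P, mu, beta, and eta below are toy assumptions, not values from the paper.
import numpy as np

def regularized_grad(P, pi, mu, beta):
    """Gradient (up to an additive constant absorbed by normalization) of
    E_{a~pi', b~pi}[P(a, b)] - beta * KL(pi' || mu), evaluated at pi' = pi."""
    return P @ pi - beta * (np.log(pi) - np.log(mu))

def mirror_prox_step(P, pi, mu, beta, eta):
    """One Mirror Prox step with the entropic mirror map (multiplicative weights)."""
    # Extrapolation (half) step: gradient evaluated at the current policy.
    half = pi * np.exp(eta * regularized_grad(P, pi, mu, beta))
    half /= half.sum()
    # Full step: gradient evaluated at the extrapolated policy.
    new = pi * np.exp(eta * regularized_grad(P, half, mu, beta))
    return new / new.sum()

# Toy intransitive (rock-paper-scissors-like) preferences: P[a, b] = prob(a beats b).
P = np.array([[0.5, 0.9, 0.1],
              [0.1, 0.5, 0.9],
              [0.9, 0.1, 0.5]])
mu = np.ones(3) / 3               # uniform reference policy
pi = np.array([0.7, 0.2, 0.1])    # arbitrary initialization
beta, eta = 0.1, 0.5
for _ in range(200):
    pi = mirror_prox_step(P, pi, mu, beta, eta)
print(pi)  # converges toward the regularized Nash equilibrium (uniform for this symmetric game)
```

The extrapolation step is what distinguishes Mirror Prox from plain mirror descent: the policy is updated using a gradient evaluated at a look-ahead point, which is what enables the fast last-iterate convergence the abstract describes.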

Daniil Tiapkin, Daniele Calandriello, Denis Belomestny, Eric Moulines, Alexey Naumov, Kashif Rasul, Michal Valko, Pierre Menard

Computing Technology, Computer Technology

Daniil Tiapkin, Daniele Calandriello, Denis Belomestny, Eric Moulines, Alexey Naumov, Kashif Rasul, Michal Valko, Pierre Menard. Accelerating Nash Learning from Human Feedback via Mirror Prox [EB/OL]. (2025-05-26) [2025-06-12]. https://arxiv.org/abs/2505.19731.
