
Contextual Online Uncertainty-Aware Preference Learning for Human Feedback

Source: arXiv

Abstract

Reinforcement Learning from Human Feedback (RLHF) has become a pivotal paradigm in artificial intelligence for aligning large models with human preferences. In this paper, we propose a novel statistical framework to simultaneously conduct online decision-making and statistical inference on the optimal model using human preference data based on dynamic contextual information. Our approach introduces an efficient decision strategy that achieves both the optimal regret bound and the asymptotic distribution of the estimators. A key challenge in RLHF is handling dependent online human preference outcomes under dynamic contexts. To address this, on the methodological side, we propose a two-stage algorithm starting with $\epsilon$-greedy exploration followed by an exploitation stage; on the theoretical side, we tailor anti-concentration inequalities and matrix martingale concentration techniques to derive the uniform estimation rate and asymptotic normality of the estimators using dependent samples from both stages. Extensive simulation results demonstrate that our method outperforms state-of-the-art strategies. We apply the proposed framework to analyze human preference data for ranking large language models on the Massive Multitask Language Understanding dataset, yielding insightful results on the performance of different large language models for medical anatomy knowledge.
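To make the two-stage strategy described above concrete, here is a minimal sketch of an $\epsilon$-greedy exploration stage followed by an exploitation stage for contextual preference feedback. This is an illustration under assumptions, not the authors' implementation: it assumes a Bradley-Terry-style logistic preference model with per-arm linear utilities, a simulated human labeler, and hypothetical names and parameters (`T_explore`, `eps`, `fit_theta`), and it omits the paper's inference machinery (anti-concentration and matrix martingale concentration arguments).

```python
# Hypothetical sketch: epsilon-greedy exploration then exploitation for
# contextual preference learning with a Bradley-Terry-style logistic model.
# All names, dimensions, and the simulated environment are illustrative
# assumptions, not the paper's code.
import numpy as np

rng = np.random.default_rng(0)

d, n_arms = 5, 4                                  # context dimension, number of candidate models
theta_star = rng.normal(size=(n_arms, d))         # unknown per-arm utility parameters (simulation only)

def utility(theta, x):
    """Contextual utility of each arm under parameter matrix theta."""
    return theta @ x                              # shape (n_arms,)

def preference_prob(theta, x, i, j):
    """Bradley-Terry probability that arm i is preferred to arm j given context x."""
    diff = utility(theta, x)[i] - utility(theta, x)[j]
    return 1.0 / (1.0 + np.exp(-diff))

def fit_theta(history, lr=0.1, steps=200):
    """Maximum-likelihood fit of the logistic preference model by gradient ascent."""
    theta = np.zeros((n_arms, d))
    for _ in range(steps):
        grad = np.zeros_like(theta)
        for x, i, j, y in history:                # y = 1 if arm i was preferred to arm j
            p = preference_prob(theta, x, i, j)
            grad[i] += (y - p) * x
            grad[j] -= (y - p) * x
        theta += lr * grad / max(len(history), 1)
    return theta

T, T_explore, eps = 500, 200, 0.2                 # hypothetical horizon and stage-1 length
history, theta_hat = [], np.zeros((n_arms, d))

for t in range(T):
    x = rng.normal(size=d)                        # dynamic context arriving at round t
    if t < T_explore and (t == 0 or rng.random() < eps):
        i, j = rng.choice(n_arms, size=2, replace=False)      # stage 1: explore a random pair
    else:
        u = utility(theta_hat, x)
        i, j = np.argsort(u)[-2:][::-1]                       # stage 2: exploit the top-two arms
    y = rng.random() < preference_prob(theta_star, x, i, j)   # simulated human preference feedback
    history.append((x, int(i), int(j), int(y)))
    if (t + 1) % 50 == 0:
        theta_hat = fit_theta(history)                        # refit on the dependent samples so far

# Preference probabilities only identify theta up to a common shift, so compare after centering.
center = lambda th: th - th.mean(axis=0, keepdims=True)
print("parameter error (after centering):", np.linalg.norm(center(theta_hat) - center(theta_star)))
```

In this sketch, exploration in the first stage supplies well-spread comparisons so that the later maximum-likelihood refits are stable, while the second stage purely exploits the current estimate; the paper's analysis of such dependent two-stage samples is what yields the regret bound and asymptotic normality claimed in the abstract.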

Nan Lu, Ethan X. Fang, Junwei Lu

Subjects: Computing Technology, Computer Technology

Nan Lu, Ethan X. Fang, Junwei Lu. Contextual Online Uncertainty-Aware Preference Learning for Human Feedback [EB/OL]. (2025-04-27) [2025-05-06]. https://arxiv.org/abs/2504.19342.
