ActiveDPO: Active Direct Preference Optimization for Sample-Efficient Alignment

Source: arXiv
Abstract

The recent success of using human preferences to align large language models (LLMs) has significantly improved their performance in various downstream tasks like question answering, mathematical reasoning, and code generation. However, achieving effective LLM alignment depends on high-quality human preference datasets. Collecting these datasets requires human preference annotation, which is costly and resource-intensive, necessitating efficient active data selection methods. Existing methods either lack a strong theoretical foundation or depend on restrictive reward function assumptions (e.g., linearity). To this end, we propose an algorithm, ActiveDPO, that uses a theoretically grounded data selection criterion for non-linear reward functions while directly leveraging the LLM itself to parameterize the reward model used for active data selection. As a result, ActiveDPO explicitly accounts for the influence of the LLM on data selection, unlike methods that select data without considering the LLM being aligned, thereby leading to more effective and efficient data collection. Extensive experiments show that ActiveDPO outperforms existing methods across various models and datasets.
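
The abstract does not spell out ActiveDPO's exact selection rule, so the sketch below is only a minimal illustration of the general idea it describes: scoring candidate preference pairs with a reward parameterized by the LLM itself (here, the standard DPO implicit reward, beta * log(pi_theta(y|x) / pi_ref(y|x))) and spending the annotation budget on the pairs the current model finds hardest to separate. The margin-based uncertainty proxy and the function names are assumptions for illustration, not the paper's criterion.

```python
# Hypothetical sketch: active selection of preference pairs for human
# annotation, using an LLM-parameterized (DPO-style implicit) reward.
# The least-margin heuristic below is an assumed uncertainty proxy,
# not the selection criterion defined in the ActiveDPO paper.
import torch


def implicit_reward(policy_logp: torch.Tensor,
                    ref_logp: torch.Tensor,
                    beta: float = 0.1) -> torch.Tensor:
    """DPO implicit reward: beta * (log pi_theta(y|x) - log pi_ref(y|x))."""
    return beta * (policy_logp - ref_logp)


def select_for_annotation(policy_logp_a, ref_logp_a,
                          policy_logp_b, ref_logp_b,
                          budget: int) -> torch.Tensor:
    """Pick `budget` candidate pairs whose implicit-reward margin is closest
    to zero, i.e. where the current model is least certain which response
    the annotator would prefer."""
    margin = implicit_reward(policy_logp_a, ref_logp_a) \
           - implicit_reward(policy_logp_b, ref_logp_b)
    uncertainty = -margin.abs()  # small |margin| -> high uncertainty
    return torch.topk(uncertainty, k=budget).indices


if __name__ == "__main__":
    n = 1000  # candidate (prompt, response_a, response_b) triples
    # Toy sequence log-probabilities standing in for real LLM scores.
    pol_a, ref_a = torch.randn(n), torch.randn(n)
    pol_b, ref_b = torch.randn(n), torch.randn(n)
    chosen = select_for_annotation(pol_a, ref_a, pol_b, ref_b, budget=32)
    print("indices to send to human annotators:", chosen.tolist())
```

In practice the selected pairs would be labeled by annotators, the LLM updated with DPO on the new labels, and the scores recomputed before the next selection round.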

Xiaoqiang Lin, Arun Verma, Zhongxiang Dai, Daniela Rus, See-Kiong Ng, Bryan Kian Hsiang Low

Subject: Computing Technology, Computer Technology

Xiaoqiang Lin, Arun Verma, Zhongxiang Dai, Daniela Rus, See-Kiong Ng, Bryan Kian Hsiang Low. ActiveDPO: Active Direct Preference Optimization for Sample-Efficient Alignment [EB/OL]. (2025-05-25) [2025-06-22]. https://arxiv.org/abs/2505.19241.