
Not All Preferences are What You Need for Post-Training: Selective Alignment Strategy for Preference Optimization

Source: arXiv

Abstract

Post-training alignment of large language models (LLMs) is a critical challenge, as not all tokens contribute equally to model performance. This paper introduces a selective alignment strategy that prioritizes high-impact tokens within preference pairs, leveraging token-level log-probability differences between the current policy and a reference model. By focusing on these informative tokens, our approach reduces computational overhead and enhances alignment fidelity. We further explore the role of reference model quality, demonstrating that stronger reference models significantly improve token selection accuracy and overall optimization effectiveness. Comprehensive experiments on benchmarks such as Arena-Hard and MT-Bench validate the superiority of our Selective-DPO method over standard DPO and distillation-based baselines. Our findings highlight the importance of token-level optimization and reference model selection in advancing preference alignment for LLMs. The code is available at https://github.com/Dongzhijin/SDPO.
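The core scoring step the abstract describes, ranking tokens within a preference pair by the log-probability gap between the current policy and a reference model, can be illustrated with a short PyTorch-style sketch. This is not the authors' implementation (their code is at https://github.com/Dongzhijin/SDPO); the Hugging Face-style causal-LM interface, the absolute-gap selection criterion, and the keep_ratio parameter are assumptions made purely for illustration.

import torch
import torch.nn.functional as F

def token_logps(model, input_ids, attention_mask):
    # Per-token log-probabilities of each observed next token under `model`
    # (assumes a Hugging Face-style causal LM that exposes `.logits`).
    logits = model(input_ids=input_ids, attention_mask=attention_mask).logits[:, :-1, :]
    labels = input_ids[:, 1:]
    logps = F.log_softmax(logits, dim=-1)
    return torch.gather(logps, 2, labels.unsqueeze(-1)).squeeze(-1)  # (batch, seq-1)

def select_high_impact_tokens(policy, reference, input_ids, attention_mask, keep_ratio=0.5):
    # Score each token by the gap between policy and reference log-probabilities,
    # then keep only the top `keep_ratio` fraction. Both the absolute-gap criterion
    # and `keep_ratio` are illustrative assumptions, not the paper's exact rule.
    with torch.no_grad():
        ref_logps = token_logps(reference, input_ids, attention_mask)
    policy_logps = token_logps(policy, input_ids, attention_mask)
    gap = (policy_logps - ref_logps).abs()
    k = max(1, int(keep_ratio * gap.shape[-1]))
    top_idx = gap.topk(k, dim=-1).indices
    mask = torch.zeros_like(gap, dtype=torch.bool).scatter_(1, top_idx, True)
    return policy_logps, ref_logps, mask

The returned mask would then restrict the per-token terms of a DPO-style preference loss to the selected high-impact positions, which is the selective alignment idea the abstract summarizes.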

Zhijin Dong

Subject: Computing Technology; Computer Technology

Zhijin Dong. Not All Preferences are What You Need for Post-Training: Selective Alignment Strategy for Preference Optimization [EB/OL]. (2025-07-10) [2025-07-23]. https://arxiv.org/abs/2507.07725.
