MOSLIM: Align with diverse preferences in prompts through reward classification

Source: arXiv
Abstract

The multi-objective alignment of Large Language Models (LLMs) is essential for ensuring that foundation models conform to diverse human preferences. Current research in this field typically relies either on multiple policies or multiple reward models customized for different preferences, or on training a preference-specific supervised fine-tuning (SFT) model. In this work, we introduce a novel multi-objective alignment method, MOSLIM, which uses a single reward model and a single policy model to address diverse objectives. MOSLIM provides a flexible way to control these objectives through prompting and does not require preference training during the SFT phase, allowing thousands of off-the-shelf models to be used directly within this training framework. MOSLIM leverages a multi-head reward model that classifies question-answer pairs instead of scoring them, and then optimizes the policy model with a scalar reward derived from a mapping function that converts the classification results of the reward model into reward scores. We demonstrate the efficacy of the proposed method on several multi-objective benchmarks and conduct ablation studies over various reward model sizes and policy optimization methods. The MOSLIM method outperforms current multi-objective approaches in most results while requiring significantly fewer GPU computing resources than existing policy optimization methods.
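
The following Python sketch (not the authors' released code) illustrates the mechanism the abstract describes: a single reward model with one classification head per objective that labels a question-answer pair rather than scoring it, and a mapping function that converts those classification results into one scalar reward for policy optimization. The class and function names, the three-class heads, the expected-class-index mapping, and the prompt-derived objective weights are all illustrative assumptions.

```python
import torch
import torch.nn as nn


class MultiHeadRewardClassifier(nn.Module):
    """One shared backbone with a separate classification head per alignment
    objective; each head classifies a question-answer pair instead of scoring it."""

    def __init__(self, encoder: nn.Module, hidden_dim: int,
                 objectives: list[str], num_classes: int = 3):
        super().__init__()
        self.encoder = encoder  # e.g. an off-the-shelf transformer backbone
        self.heads = nn.ModuleDict({
            name: nn.Linear(hidden_dim, num_classes) for name in objectives
        })

    def forward(self, qa_features: torch.Tensor) -> dict[str, torch.Tensor]:
        h = self.encoder(qa_features)  # (batch, hidden_dim)
        return {name: head(h) for name, head in self.heads.items()}


def map_to_scalar_reward(logits: dict[str, torch.Tensor],
                         weights: dict[str, float]) -> torch.Tensor:
    """Illustrative mapping function: convert per-objective class probabilities
    into a single scalar reward by treating the expected class index as a
    preference intensity and mixing objectives with prompt-derived weights."""
    reward = torch.zeros(next(iter(logits.values())).shape[0])
    for name, head_logits in logits.items():
        probs = head_logits.softmax(dim=-1)                      # (batch, num_classes)
        levels = torch.arange(probs.shape[-1], dtype=probs.dtype)
        intensity = (probs * levels).sum(dim=-1)                 # expected class index
        reward = reward + weights.get(name, 1.0) * intensity
    return reward  # one scalar per sample, fed to the policy optimizer


# Toy usage: the preference stated in the prompt is represented here only as
# objective weights, purely for illustration.
if __name__ == "__main__":
    model = MultiHeadRewardClassifier(nn.Identity(), hidden_dim=8,
                                      objectives=["helpfulness", "harmlessness"])
    feats = torch.randn(4, 8)  # placeholder features for four QA pairs
    rewards = map_to_scalar_reward(model(feats),
                                   weights={"helpfulness": 0.7, "harmlessness": 0.3})
    print(rewards.shape)  # torch.Size([4])
```

Because the reward model only classifies and the scalarization happens in the mapping function, a single reward model and a single policy model can serve many preference mixes, which is the property the abstract attributes to MOSLIM.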

Yu Zhang, Wanli Jiang, Zhengyu Yang

Computing Technology, Computer Technology

Yu Zhang, Wanli Jiang, Zhengyu Yang. MOSLIM: Align with diverse preferences in prompts through reward classification [EB/OL]. (2025-05-24) [2025-06-22]. https://arxiv.org/abs/2505.20336.
