COS-DPO: Conditioned One-Shot Multi-Objective Fine-Tuning Framework

Source: arXiv
English Abstract

In LLM alignment and many other ML applications, one often faces the Multi-Objective Fine-Tuning (MOFT) problem, i.e., fine-tuning an existing model with datasets labeled w.r.t. different objectives simultaneously. To address this challenge, we propose a Conditioned One-Shot fine-tuning framework (COS-DPO) that extends the Direct Preference Optimization technique, originally developed for efficient LLM alignment with preference data, to accommodate MOFT settings. By directly conditioning on the weights across auxiliary objectives, our Weight-COS-DPO method enjoys an efficient one-shot training process for profiling the Pareto front and can achieve comprehensive trade-off solutions even in the post-training stage. Based on our theoretical findings on the linear transformation properties of the loss function, we further propose the Temperature-COS-DPO method, which augments the model input with the temperature parameter, enhancing the flexibility of post-training control over the trade-offs between the main and auxiliary objectives. We demonstrate the effectiveness and efficiency of the COS-DPO framework through its applications to various tasks, including Learning-to-Rank (LTR) and LLM alignment, highlighting its viability for large-scale ML deployments.
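To make the weight-conditioning idea concrete, below is a minimal sketch, in PyTorch, of one plausible form of a linearly scalarized, weight-conditioned DPO-style loss. It is an illustrative reading of the abstract, not the paper's actual formulation: the function name weighted_dpo_loss, the per-objective margin layout, and the Dirichlet sampling of trade-off weights are all assumptions made for demonstration.

```python
# Illustrative sketch only -- NOT the paper's exact COS-DPO loss.
import torch
import torch.nn.functional as F

def weighted_dpo_loss(policy_logp_chosen, policy_logp_rejected,
                      ref_logp_chosen, ref_logp_rejected,
                      weights, beta=0.1):
    """
    policy_logp_* / ref_logp_*: (batch, num_objectives) log-probs of the chosen and
        rejected responses under the policy and the frozen reference model, where each
        column corresponds to preference data labeled w.r.t. one objective (assumed layout).
    weights: (num_objectives,) trade-off weight vector the policy is conditioned on.
    """
    # Per-objective implicit reward margins, as in standard DPO:
    # (log pi_theta(y_w|x) - log pi_ref(y_w|x)) - (log pi_theta(y_l|x) - log pi_ref(y_l|x))
    margins = (policy_logp_chosen - ref_logp_chosen) - (policy_logp_rejected - ref_logp_rejected)
    # Linearly scalarize the per-objective margins with the trade-off weights (assumption).
    scalarized = margins @ weights
    # Standard DPO negative log-sigmoid loss applied to the scalarized margin.
    return -F.logsigmoid(beta * scalarized).mean()

# One-shot training idea: sample a trade-off weight vector per batch and condition the
# policy on it (e.g., via a prompt prefix or a learned embedding), so a single training
# run profiles the whole Pareto front instead of one run per weight setting.
weights = torch.distributions.Dirichlet(torch.ones(2)).sample()
batch, num_obj = 4, 2
loss = weighted_dpo_loss(torch.randn(batch, num_obj), torch.randn(batch, num_obj),
                         torch.randn(batch, num_obj), torch.randn(batch, num_obj),
                         weights)
```

Under this reading, the Temperature-COS-DPO variant would analogously expose the temperature beta as part of the model input rather than fixing it, which is what allows the main/auxiliary trade-off to be adjusted after training.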

Holakou Rahmanian, Tesi Xiao, Michael Shavlovsky, Lexing Ying, Yinuo Ren

Subject: Computing Technology; Computer Technology

Holakou Rahmanian, Tesi Xiao, Michael Shavlovsky, Lexing Ying, Yinuo Ren. COS-DPO: Conditioned One-Shot Multi-Objective Fine-Tuning Framework [EB/OL]. (2025-06-20) [2025-07-02]. https://arxiv.org/abs/2410.08316.
