
ASPO: Adaptive Sentence-Level Preference Optimization for Fine-Grained Multimodal Reasoning

Source: arXiv
Abstract

Direct Preference Optimization (DPO) has gained significant attention for its simplicity and computational efficiency in aligning large language models (LLMs). Recent advancements have extended DPO to multimodal scenarios, achieving strong performance. However, traditional DPO relies on binary preference optimization, rewarding or penalizing entire responses without considering fine-grained segment correctness, leading to suboptimal solutions. The root of this issue lies in the absence of fine-grained supervision during the optimization process. To address this, we propose Adaptive Sentence-level Preference Optimization (ASPO), which evaluates individual sentences for more precise preference optimization. By dynamically calculating adaptive rewards at the sentence level based on model predictions, ASPO enhances response content assessment without additional models or parameters. This significantly improves the alignment of multimodal features. Extensive experiments show that ASPO substantially enhances the overall performance of multimodal models.
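The abstract does not spell out the objective, so the following is only a minimal PyTorch sketch of the idea it describes: split each response into sentences, weight each sentence's DPO log-ratio by an adaptive reward derived from the policy's own predictions, and apply the usual preference loss. The function names (`sentence_logratios`, `adaptive_weights`, `aspo_style_loss`) and the softmax-over-mean-confidence weighting are hypothetical illustrations, not the authors' formulation.

```python
import torch
import torch.nn.functional as F


def sentence_logratios(policy_logps, ref_logps, sent_ids, num_sents):
    """Sum per-token log-ratios log pi(t) - log pi_ref(t) within each sentence.

    policy_logps, ref_logps: (T,) per-token log-probabilities of a response.
    sent_ids: (T,) long tensor giving each token's sentence index.
    Returns a (num_sents,) tensor of per-sentence log-ratios.
    """
    token_ratios = policy_logps - ref_logps
    out = torch.zeros(num_sents)
    out.index_add_(0, sent_ids, token_ratios)
    return out


def adaptive_weights(policy_logps, sent_ids, num_sents):
    """One plausible 'adaptive reward': the policy's mean per-token confidence
    in each sentence, normalized with a softmax. Purely illustrative."""
    sums = torch.zeros(num_sents).index_add_(0, sent_ids, policy_logps)
    counts = torch.zeros(num_sents).index_add_(
        0, sent_ids, torch.ones_like(policy_logps))
    mean_conf = sums / counts.clamp(min=1)
    return torch.softmax(mean_conf, dim=0)


def aspo_style_loss(pol_w, ref_w, sid_w, n_w, pol_l, ref_l, sid_l, n_l, beta=0.1):
    """DPO-style preference loss with sentence-level reweighting: each
    sentence's log-ratio is scaled by its adaptive weight before the margin."""
    w_w = adaptive_weights(pol_w.detach(), sid_w, n_w)  # chosen response
    w_l = adaptive_weights(pol_l.detach(), sid_l, n_l)  # rejected response
    r_w = (w_w * sentence_logratios(pol_w, ref_w, sid_w, n_w)).sum()
    r_l = (w_l * sentence_logratios(pol_l, ref_l, sid_l, n_l)).sum()
    return -F.logsigmoid(beta * (r_w - r_l))


# Toy check with random log-probabilities: 6-token responses, 2 sentences each.
torch.manual_seed(0)
sid = torch.tensor([0, 0, 0, 1, 1, 1])
pol_w, ref_w = torch.rand(6).log(), torch.rand(6).log()
pol_l, ref_l = torch.rand(6).log(), torch.rand(6).log()
print(aspo_style_loss(pol_w, ref_w, sid, 2, pol_l, ref_l, sid, 2))
```

Detaching the policy log-probabilities inside `adaptive_weights` keeps the adaptive rewards out of the gradient path, so they act as fixed per-sentence credit assignment within each update and, as the abstract notes, require no extra models or parameters; the paper's actual reward definition may differ.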

Yeyuan Wang, Dehong Gao, Rujiao Long, Lei Yi, Linbo Jin, Libin Yang, Xiaoyan Cai

Subject: Computing Technology; Computer Technology

Yeyuan Wang, Dehong Gao, Rujiao Long, Lei Yi, Linbo Jin, Libin Yang, Xiaoyan Cai. ASPO: Adaptive Sentence-Level Preference Optimization for Fine-Grained Multimodal Reasoning [EB/OL]. (2025-05-25) [2025-06-14]. https://arxiv.org/abs/2505.19100.
