National Preprint Platform

Modality-Balancing Preference Optimization of Large Multimodal Models by Adversarial Negative Mining


Source: arXiv
Abstract

The task adaptation and alignment of Large Multimodal Models (LMMs) have been significantly advanced by instruction tuning and further strengthened by recent preference optimization. Yet, most LMMs still suffer from severe modality imbalance during reasoning, i.e., outweighing language prior biases over visual inputs, which bottlenecks their generalization to downstream tasks and causes hallucinations. However, existing preference optimization approaches for LMMs do not focus on restraining the internal biases of their Large Language Model (LLM) backbones when curating the training data. Moreover, they heavily rely on offline data and lack the capacity to explore diverse responses adaptive to dynamic distributional shifts during training. Meanwhile, Group Relative Policy Optimization (GRPO), a recent method using online-generated data and verified rewards to improve reasoning capabilities, remains largely underexplored in LMM alignment. In this paper, we propose a novel preference learning framework, Modality-Balancing Preference Optimization (MBPO), to address the modality imbalance in LMMs. MBPO constructs a more effective offline preference dataset by generating hard negatives, i.e., rejected responses misled by LLM biases due to limited usage of visual information, through adversarial perturbation of input images. Moreover, MBPO leverages the easy-to-verify nature of close-ended tasks to generate online responses with verified rewards. GRPO is then employed to train the model with offline-online hybrid data. Extensive experiments demonstrate that MBPO can enhance LMM performance on challenging vision-language tasks and effectively reduce hallucinations.
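The two mechanisms the abstract describes — mining hard negatives by adversarially perturbing the input image, and GRPO's group-relative normalization of verified rewards — can be illustrated with a toy sketch. This is a minimal illustration only: the linear scorer, `EPSILON`, and all function names are hypothetical stand-ins, not the paper's implementation (a real LMM would supply the score and its gradient via backpropagation).

```python
from statistics import mean, pstdev

EPSILON = 0.1  # perturbation budget (assumed hyperparameter)

def score(image, weights):
    """Toy stand-in for the LMM's log-likelihood of the grounded answer."""
    return sum(x * w for x, w in zip(image, weights))

def grad_score(image, weights):
    """Analytic gradient of the linear toy scorer w.r.t. the image pixels."""
    return list(weights)

def sign(x):
    return (x > 0) - (x < 0)

def adversarial_perturb(image, weights, eps=EPSILON):
    """One FGSM-style step: move the image against the score gradient, so the
    model's reliance on visual evidence drops and regenerated responses
    (driven by language priors) can serve as hard rejected negatives."""
    g = grad_score(image, weights)
    return [x - eps * sign(gi) for x, gi in zip(image, g)]

def group_relative_advantages(rewards):
    """GRPO-style advantages: normalize verified rewards within a group of
    sampled responses (zero-mean, unit-variance across the group)."""
    mu = mean(rewards)
    sigma = pstdev(rewards) or 1.0  # guard against an all-equal group
    return [(r - mu) / sigma for r in rewards]

image = [0.5, -0.2, 0.8]
weights = [1.0, -0.5, 2.0]
perturbed = adversarial_perturb(image, weights)
# The perturbed image scores lower for the visually grounded answer.
assert score(perturbed, weights) < score(image, weights)
```

For the online half of the hybrid data, `group_relative_advantages([1.0, 0.0, 1.0, 0.0])` yields `[1.0, -1.0, 1.0, -1.0]`: correct close-ended responses are reinforced relative to their own sampled group rather than against an absolute baseline.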

Chenxi Liu, Tianyi Xiong, Ruibo Chen, Yihan Wu, Junfeng Guo, Tianyi Zhou, Heng Huang

Computing Technology; Computer Technology

Chenxi Liu, Tianyi Xiong, Ruibo Chen, Yihan Wu, Junfeng Guo, Tianyi Zhou, Heng Huang. Modality-Balancing Preference Optimization of Large Multimodal Models by Adversarial Negative Mining [EB/OL]. (2025-05-19) [2025-07-01]. https://arxiv.org/abs/2506.08022.
