
CAI: Caption-Sensitive Attention Intervention for Mitigating Object Hallucination in Large Vision-Language Models

Source: arXiv
Abstract

Although Large Vision-Language Models (LVLMs) have demonstrated powerful capabilities in interpreting visual information, they frequently produce content that deviates from the visual input, leading to object hallucination. To tackle this, recent works mostly rely on expensive manual annotations and training costs, or significantly increase inference time. In this work, we observe that LVLMs' attention to visual information is significantly stronger when answering caption queries than non-caption queries. Inspired by this phenomenon, we propose Caption-sensitive Attention Intervention (CAI), a training-free, plug-and-play hallucination mitigation method that leverages the attention activation pattern elicited by caption queries to enhance LVLMs' visual perception capability. Extensive experimental results across four benchmarks covering both discriminative and generative tasks demonstrate that CAI achieves state-of-the-art (SOTA) hallucination mitigation performance with only minimal additional inference cost.
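The abstract does not give implementation details, but the core idea it describes (measure how strongly each attention head attends to image tokens when the model answers a caption query, then use that pattern to steer attention toward visual tokens on the actual query) can be sketched roughly as follows. This is an illustrative sketch only, not the authors' released method: the tensor shapes, the `caption_sensitivity` and `intervene` helpers, the multiplicative boost rule, and the `alpha` hyperparameter are all assumptions for demonstration.

```python
# Minimal sketch (assumed, not the paper's code): record per-head attention mass on
# image tokens under a caption prompt, then boost attention toward image tokens by
# that per-head factor when answering the actual query, and renormalize.
import torch

def caption_sensitivity(attn_caption: torch.Tensor, image_mask: torch.Tensor) -> torch.Tensor:
    """Per-head attention mass on image tokens under a caption query.

    attn_caption: (num_heads, query_len, key_len) attention weights from a forward
                  pass on a caption prompt such as "Describe the image in detail."
    image_mask:   (key_len,) boolean mask marking image-token positions.
    Returns:      (num_heads,) average attention mass on image tokens per head.
    """
    return attn_caption[..., image_mask].sum(dim=-1).mean(dim=-1)

def intervene(attn: torch.Tensor, image_mask: torch.Tensor,
              head_weights: torch.Tensor, alpha: float = 0.5) -> torch.Tensor:
    """Boost attention toward image tokens, scaled by each head's caption sensitivity.

    attn: (num_heads, query_len, key_len) attention weights for the actual query.
    """
    boost = 1.0 + alpha * head_weights.view(-1, 1, 1)           # per-head boost factor
    scaled = torch.where(image_mask.view(1, 1, -1), attn * boost, attn)
    return scaled / scaled.sum(dim=-1, keepdim=True)            # keep rows summing to 1

# Tiny self-contained demo with random attention weights.
if __name__ == "__main__":
    heads, q_len, k_len = 8, 4, 16
    img = torch.zeros(k_len, dtype=torch.bool)
    img[:6] = True                                               # first 6 keys = image tokens
    attn_cap = torch.softmax(torch.randn(heads, q_len, k_len), dim=-1)
    attn_qry = torch.softmax(torch.randn(heads, q_len, k_len), dim=-1)
    weights = caption_sensitivity(attn_cap, img)
    print(intervene(attn_qry, img, weights).sum(dim=-1))         # each row remains a distribution
```

In a real LVLM such an intervention would presumably be applied inside the attention layers (e.g., via forward hooks) at inference time, which is consistent with the abstract's claim of a training-free, plug-and-play method with minimal extra inference cost; the specific mechanism here is an assumption.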

Qiming Li, Zekai Ye, Xiaocheng Feng, Weihong Zhong, Libo Qin, Ruihan Chen, Baohang Li, Kui Jiang, Yaowei Wang, Ting Liu, Bing Qin

Subject: Computing Technology, Computer Technology

Qiming Li, Zekai Ye, Xiaocheng Feng, Weihong Zhong, Libo Qin, Ruihan Chen, Baohang Li, Kui Jiang, Yaowei Wang, Ting Liu, Bing Qin. CAI: Caption-Sensitive Attention Intervention for Mitigating Object Hallucination in Large Vision-Language Models [EB/OL]. (2025-06-30) [2025-07-16]. https://arxiv.org/abs/2506.23590.