
Mitigating Hallucination in Large Vision-Language Models via Adaptive Attention Calibration

Source: arXiv
Abstract

Large vision-language models (LVLMs) achieve impressive performance on multimodal tasks but often suffer from hallucination, confidently describing objects or attributes not present in the image. Current inference-time interventions, while training-free, struggle to maintain accuracy in open-ended and long-form generation scenarios. We introduce the Confidence-Aware Attention Calibration (CAAC) framework to address this challenge by targeting two key biases: spatial perception bias, which distributes attention disproportionately across image tokens, and modality bias, which shifts focus from visual to textual inputs over time. CAAC employs a two-step approach: Visual-Token Calibration (VTC) to balance attention across visual tokens, and Adaptive Attention Re-Scaling (AAR) to reinforce visual grounding based on the model's confidence. This confidence-driven adjustment ensures consistent visual alignment during generation. Experiments on the CHAIR, AMBER, and POPE benchmarks demonstrate that CAAC outperforms baselines, particularly in long-form generation, effectively reducing hallucination.
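To make the two-step idea concrete, the sketch below illustrates how attention weights at a single decoding step might be calibrated: first smoothing attention over visual tokens (in the spirit of VTC), then boosting the visual attention mass when the model's confidence is low (in the spirit of AAR). This is a minimal illustration only; the function name `calibrate_attention`, the use of the previous token's max softmax probability as the confidence signal, and the hyperparameters `alpha` and `beta` are assumptions, not the paper's actual formulation.

```python
import torch

def calibrate_attention(attn, visual_idx, confidence, alpha=0.5, beta=1.0):
    """Illustrative confidence-aware attention calibration (not the paper's exact method).

    attn:       (num_heads, seq_len) attention weights of the current decoding
                step over all context tokens; each row sums to 1.
    visual_idx: indices of the image tokens within the context.
    confidence: scalar in (0, 1], e.g. the max softmax probability of the
                previously generated token (assumed proxy for model confidence).
    alpha:      strength of the visual-token smoothing step (assumed).
    beta:       strength of the confidence-driven re-scaling (assumed).
    """
    attn = attn.clone()
    vis = attn[:, visual_idx]                      # attention on image tokens

    # Step 1 (VTC-like): interpolate toward a uniform distribution over the
    # visual tokens to counter spatial perception bias, keeping the total
    # visual attention mass unchanged.
    vis_mass = vis.sum(dim=-1, keepdim=True)
    uniform = vis_mass / vis.shape[-1]
    vis = (1 - alpha) * vis + alpha * uniform

    # Step 2 (AAR-like): when the model is less confident, boost the total
    # attention mass on visual tokens to counter modality bias.
    boost = 1.0 + beta * (1.0 - confidence)
    vis = vis * boost

    attn[:, visual_idx] = vis
    # Re-normalize so each head's attention still sums to 1.
    return attn / attn.sum(dim=-1, keepdim=True)

# Example usage with random weights: 8 heads, 32 context tokens,
# of which the first 16 are image tokens.
weights = torch.softmax(torch.randn(8, 32), dim=-1)
calibrated = calibrate_attention(weights, torch.arange(16), confidence=0.6)
```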

Mehrdad Fazli, Bowen Wei, Ziwei Zhu

Computing Technology, Computer Technology

Mehrdad Fazli, Bowen Wei, Ziwei Zhu. Mitigating Hallucination in Large Vision-Language Models via Adaptive Attention Calibration [EB/OL]. (2025-05-27) [2025-07-16]. https://arxiv.org/abs/2505.21472.
