
Mitigating Hallucinations via Inter-Layer Consistency Aggregation in Large Vision-Language Models

Source: arXiv
Abstract

Despite the impressive capabilities of Large Vision-Language Models (LVLMs), they remain susceptible to hallucinations: generating content that is inconsistent with the input image. Existing training-free hallucination mitigation methods often suffer from unstable performance and high sensitivity to hyperparameter settings, limiting their practicality and broader adoption. In this paper, we propose a novel decoding mechanism, Decoding with Inter-layer Consistency via Layer Aggregation (DCLA), which requires no retraining, fine-tuning, or access to external knowledge bases. Specifically, our approach constructs a dynamic semantic reference by aggregating representations from previous layers, and corrects semantically deviated layers to enforce inter-layer consistency. This mechanism enables DCLA to robustly mitigate hallucinations across multiple LVLMs. Experiments on hallucination benchmarks such as MME and POPE demonstrate that DCLA effectively reduces hallucinations while enhancing the reliability and performance of LVLMs.
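
The abstract only sketches the aggregation and correction steps at a high level. The PyTorch snippet below is a minimal illustrative rendering, not the authors' implementation: it assumes the dynamic semantic reference is a running mean of earlier layers' hidden states and that "correcting" a deviated layer means blending it back toward that reference. The function name dcla_aggregate, the sim_threshold parameter, and the 0.5 blending weight are all hypothetical choices for illustration.

import torch
import torch.nn.functional as F

def dcla_aggregate(hidden_states: list[torch.Tensor],
                   sim_threshold: float = 0.8) -> list[torch.Tensor]:
    # Sketch of inter-layer consistency aggregation (assumed form, not
    # the paper's exact method). `hidden_states` holds the per-layer
    # hidden states for the current token, each of shape (batch, dim).
    corrected = [hidden_states[0]]
    # Dynamic semantic reference: running mean of the layers seen so far.
    reference = hidden_states[0].clone()
    for i, h in enumerate(hidden_states[1:], start=1):
        # Cosine similarity between this layer and the reference: (batch,)
        sim = F.cosine_similarity(h, reference, dim=-1, eps=1e-8)
        # Layers that drift from the reference are pulled back toward it.
        deviated = (sim < sim_threshold).unsqueeze(-1)  # (batch, 1)
        h_fixed = torch.where(deviated, 0.5 * (h + reference), h)
        corrected.append(h_fixed)
        # Fold the corrected layer into the running-mean reference.
        reference = (reference * i + h_fixed) / (i + 1)
    return corrected

In an actual decoding loop, a correction of this kind would be applied to each generated token's intermediate hidden states before the final layer produces logits, which is what makes the approach training-free.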

Kai Tang, Jinhao You, Xiuqi Ge, Hanze Li, Yichen Guo, Xiande Huang

Subject: Computing Technology; Computer Technology

Kai Tang, Jinhao You, Xiuqi Ge, Hanze Li, Yichen Guo, Xiande Huang. Mitigating Hallucinations via Inter-Layer Consistency Aggregation in Large Vision-Language Models [EB/OL]. (2025-05-18) [2025-06-14]. https://arxiv.org/abs/2505.12343.
