Visual hallucination detection in large vision-language models via evidential conflict

Source: arXiv
Abstract

Despite the remarkable multimodal capabilities of Large Vision-Language Models (LVLMs), discrepancies often occur between visual inputs and textual outputs, a phenomenon we term visual hallucination. This critical reliability gap poses substantial risks in safety-critical Artificial Intelligence (AI) applications, necessitating a comprehensive evaluation benchmark and effective detection methods. First, we observe that existing visual-centric hallucination benchmarks mainly assess LVLMs from a perception perspective, overlooking hallucinations arising from advanced reasoning capabilities. We therefore develop the Perception-Reasoning Evaluation Hallucination (PRE-HAL) dataset, which enables systematic evaluation of both the perception and reasoning capabilities of LVLMs across multiple visual semantics, such as instances, scenes, and relations. Comprehensive evaluation with this new benchmark exposes more visual vulnerabilities, particularly in the more challenging task of relation reasoning. To address this issue, we propose, to the best of our knowledge, the first Dempster-Shafer theory (DST)-based visual hallucination detection method for LVLMs, built on uncertainty estimation. The method efficiently captures the degree of conflict among high-level features at the model inference phase, employing simple mass functions to mitigate the computational complexity of evidence combination on power sets. We conduct an extensive evaluation of state-of-the-art LVLMs, LLaVA-v1.5, mPLUG-Owl2, and mPLUG-Owl3, with the new PRE-HAL benchmark. Experimental results indicate that our method outperforms five baseline uncertainty metrics, achieving average AUROC improvements of 4%, 10%, and 7% on the three LVLMs. Our code is available at https://github.com/HT86159/Evidential-Conflict.
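For intuition, here is a minimal, self-contained Python sketch of how a degree of conflict arises when simple mass functions are combined under Dempster's rule. It is not the authors' released implementation: the function name `dempster_conflict`, the toy frame of discernment, and the example mass values are illustrative assumptions; in the paper, the mass functions would be constructed from the LVLM's high-level features at inference time.

```python
def dempster_conflict(evidence):
    """Combine simple mass functions with unnormalised Dempster's rule and
    return the total mass assigned to the empty set, i.e. the conflict kappa.

    Each item (focal, s) in `evidence` is a *simple* mass function: it puts
    mass s on the single focal set `focal` (a frozenset of hypotheses) and
    mass 1 - s on the whole frame of discernment Theta, represented by None.
    """
    combined = {None: 1.0}   # vacuous mass function: all mass on Theta
    conflict = 0.0

    for focal, s in evidence:
        updated = {}
        for subset, w in combined.items():
            # Pair the current focal element with `focal` (mass s).
            inter = focal if subset is None else (subset & focal)
            if inter:                       # non-empty intersection
                updated[inter] = updated.get(inter, 0.0) + w * s
            else:                           # empty intersection -> conflict
                conflict += w * s
            # Pair it with Theta (mass 1 - s): the element is unchanged.
            updated[subset] = updated.get(subset, 0.0) + w * (1.0 - s)
        combined = updated

    return conflict


# Toy example: two pieces of evidence supporting incompatible answers.
kappa = dempster_conflict([
    (frozenset({"cat"}), 0.9),   # evidence strongly supporting "cat"
    (frozenset({"dog"}), 0.8),   # evidence strongly supporting "dog"
])
print(kappa)  # 0.72 -- high conflict would flag a likely hallucination
```

Because each piece of evidence assigns mass only to one focal set plus the frame, the focal elements produced by combination are just intersections of the inputs, which keeps the cost well below enumerating the full power set; a large accumulated conflict kappa can then serve as the uncertainty score for flagging a potential hallucination.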

Zhekun Liu, Rui Wang, Yang Zhang, Liping Jing, Tao Huang

DOI: 10.1016/j.ijar.2025.109507

Computing Technology, Computer Technology

Zhekun Liu, Rui Wang, Yang Zhang, Liping Jing, Tao Huang. Visual hallucination detection in large vision-language models via evidential conflict[EB/OL]. (2025-06-24)[2025-07-09]. https://arxiv.org/abs/2506.19513.
