
Vision-Language Models Can't See the Obvious


Source: arXiv

Abstract

We present the Saliency Benchmark (SalBench), a novel benchmark designed to assess the capability of Large Vision-Language Models (LVLMs) to detect visually salient features that are readily apparent to humans, such as a large circle amidst a grid of smaller ones. The benchmark focuses on low-level features including color, intensity, and orientation, which are fundamental to human visual processing. SalBench consists of images that highlight rare, unusual, or unexpected elements within scenes and naturally draw human attention. It comprises three novel tasks for evaluating the perceptual capabilities of LVLMs: Odd-One-Out Detection, Referring Odd-One-Out, and Visual Referring Odd-One-Out. We perform a comprehensive evaluation of state-of-the-art LVLMs on SalBench, and our findings reveal a surprising limitation: LVLMs struggle to identify seemingly obvious visual anomalies, with even the advanced GPT-4o achieving only 47.6% accuracy on such a simple task. SalBench is an important step toward measuring capabilities of LVLMs that align with the subtle nature of human attention.
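The pop-out stimuli the abstract describes ("a large circle amidst a grid of smaller ones") can be illustrated with a minimal synthetic sketch. The example below is not SalBench code; it is a hypothetical construction of one such grid as an array of circle radii, plus a trivial size-based detector for the odd cell, the kind of judgment the paper reports LVLMs struggle with:

```python
# Minimal illustrative sketch (not from SalBench): a 5x5 grid of circle
# radii with one enlarged "pop-out" cell, and a detector that finds it.
from collections import Counter

def make_grid(rows=5, cols=5, base_radius=10, odd_cell=(2, 3), odd_radius=25):
    """Return a rows x cols grid of radii with one enlarged cell."""
    grid = [[base_radius] * cols for _ in range(rows)]
    r, c = odd_cell
    grid[r][c] = odd_radius
    return grid

def find_odd_one_out(grid):
    """Return the (row, col) whose radius differs from the majority value."""
    flat = [v for row in grid for v in row]
    majority = Counter(flat).most_common(1)[0][0]
    for i, row in enumerate(grid):
        for j, v in enumerate(row):
            if v != majority:
                return (i, j)
    return None

print(find_odd_one_out(make_grid()))  # → (2, 3)
```

For a low-level size feature like this, the detection is trivially programmable; the benchmark's point is that LVLMs answering from pixels alone fall well short of this ceiling.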

Yasser Dahou, Ngoc Dung Huynh, Phuc H. Le-Khac, Wamiq Reyaz Para, Ankit Singh, Sanath Narayan

Subjects: Computing Technology, Computer Technology

Yasser Dahou, Ngoc Dung Huynh, Phuc H. Le-Khac, Wamiq Reyaz Para, Ankit Singh, Sanath Narayan. Vision-Language Models Can't See the Obvious [EB/OL]. (2025-07-07) [2025-07-19]. https://arxiv.org/abs/2507.04741
