
Your other Left! Vision-Language Models Fail to Identify Relative Positions in Medical Images


Source: arXiv
English Abstract

Clinical decision-making relies heavily on understanding the relative positions of anatomical structures and anomalies. Therefore, for Vision-Language Models (VLMs) to be applicable in clinical practice, the ability to accurately determine relative positions on medical images is a fundamental prerequisite. Despite its importance, this capability remains highly underexplored. To address this gap, we evaluate the ability of state-of-the-art VLMs (GPT-4o, Llama3.2, Pixtral, and JanusPro) and find that all models fail at this fundamental task. Inspired by successful approaches in computer vision, we investigate whether visual prompts, such as alphanumeric or colored markers placed on anatomical structures, can enhance performance. While these markers provide moderate improvements, results remain significantly lower on medical images than on natural images. Our evaluations suggest that, in medical imaging, VLMs rely more on prior anatomical knowledge than on actual image content when answering relative position questions, often leading to incorrect conclusions. To facilitate further research in this area, we introduce the MIRP (Medical Imaging Relative Positioning) benchmark dataset, designed to systematically evaluate the capability to identify relative positions in medical images.

Daniel Wolf, Heiko Hillenhagen, Billurvan Taskin, Alex Bäuerle, Meinrad Beer, Michael Götz, Timo Ropinski

Subjects: Clinical Medicine; Medical Research Methods

Daniel Wolf, Heiko Hillenhagen, Billurvan Taskin, Alex Bäuerle, Meinrad Beer, Michael Götz, Timo Ropinski. Your other Left! Vision-Language Models Fail to Identify Relative Positions in Medical Images[EB/OL]. (2025-08-01)[2025-08-11]. https://arxiv.org/abs/2508.00549.
