National Preprint Platform

Visual Prompt Engineering for Vision Language Models in Radiology

Source: arXiv
Abstract

Medical image classification plays a crucial role in clinical decision-making, yet most models are constrained to a fixed set of predefined classes, limiting their adaptability to new conditions. Contrastive Language-Image Pretraining (CLIP) offers a promising solution by enabling zero-shot classification through multimodal large-scale pretraining. However, while CLIP effectively captures global image content, radiology requires a more localized focus on specific pathology regions to enhance both interpretability and diagnostic accuracy. To address this, we explore the potential of incorporating visual cues into zero-shot classification, embedding visual markers, such as arrows, bounding boxes, and circles, directly into radiological images to guide model attention. Evaluating across four public chest X-ray datasets, we demonstrate that visual markers improve AUROC by up to 0.185, highlighting their effectiveness in enhancing classification performance. Furthermore, attention map analysis confirms that visual cues help models focus on clinically relevant areas, leading to more interpretable predictions. To support further research, we use public datasets and provide our codebase and preprocessing pipeline at https://github.com/MIC-DKFZ/VPE-in-Radiology, serving as a reference point for future work on localized classification in medical imaging.
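The core idea described above is plain image preprocessing: a marker (arrow, bounding box, or circle) is drawn onto the radiograph before it enters the unchanged CLIP zero-shot pipeline. A minimal sketch using Pillow follows; the `add_marker` helper, marker styles, and colors are illustrative assumptions, not the authors' exact pipeline (see their repository for that):

```python
from PIL import Image, ImageDraw

def add_marker(img, box, kind="bbox", color="red", width=3):
    """Draw a visual prompt (bounding box, circle, or arrow) for the
    region box = (x0, y0, x1, y1) on a copy of img."""
    out = img.copy()
    draw = ImageDraw.Draw(out)
    x0, y0, x1, y1 = box
    if kind == "bbox":
        draw.rectangle(box, outline=color, width=width)
    elif kind == "circle":
        # Ellipse inscribed in the box, approximating a circled region.
        draw.ellipse(box, outline=color, width=width)
    elif kind == "arrow":
        # Arrow pointing at the region's left edge from outside.
        tip = (x0, (y0 + y1) // 2)
        tail = (max(0, x0 - 40), tip[1])
        draw.line([tail, tip], fill=color, width=width)
        draw.polygon(
            [tip, (tip[0] - 8, tip[1] - 5), (tip[0] - 8, tip[1] + 5)],
            fill=color,
        )
    else:
        raise ValueError(f"unknown marker kind: {kind}")
    return out

# Example: mark a hypothetical pathology region on a dummy "X-ray".
xray = Image.new("RGB", (224, 224), "gray")
marked = add_marker(xray, (80, 60, 160, 140), kind="bbox")
```

The marked image would then be encoded by CLIP exactly as an unmarked one, with class prompts such as "a chest X-ray showing pneumonia" scored by cosine similarity against the image embedding.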

Markus Bujotzek, Stefan Denner, David Zimmerer, Dimitrios Bounias, Raphael Stock, Klaus Maier-Hein

Subjects: Medical Research Methods; Clinical Medicine

Markus Bujotzek, Stefan Denner, David Zimmerer, Dimitrios Bounias, Raphael Stock, Klaus Maier-Hein. Visual Prompt Engineering for Vision Language Models in Radiology [EB/OL]. (2025-06-22) [2025-07-16]. https://arxiv.org/abs/2408.15802.
