Medical Large Vision Language Models with Multi-Image Visual Ability
Medical large vision-language models (LVLMs) have demonstrated promising performance across various single-image question answering (QA) benchmarks, yet their capability in processing multi-image clinical scenarios remains underexplored. Unlike single-image tasks, medical tasks involving multiple images often demand sophisticated visual understanding capabilities, such as temporal reasoning and cross-modal analysis, which are poorly supported by current medical LVLMs. To bridge this critical gap, we present the Med-MIM instruction dataset, comprising 83.2K medical multi-image QA pairs that span four types of multi-image visual abilities (temporal understanding, reasoning, comparison, and co-reference). Using this dataset, we fine-tune Mantis and LLaVA-Med, resulting in two specialized medical LVLMs: MIM-LLaVA-Med and Med-Mantis, both optimized for multi-image analysis. Additionally, we develop the Med-MIM benchmark to comprehensively evaluate the medical multi-image understanding capabilities of LVLMs. We assess eight popular LVLMs, including our two models, on the Med-MIM benchmark. Experimental results show that both Med-Mantis and MIM-LLaVA-Med achieve superior performance on the held-in and held-out subsets of the Med-MIM benchmark, demonstrating that the Med-MIM instruction dataset effectively enhances LVLMs' multi-image understanding capabilities in the medical domain.
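To make the multi-image instruction format concrete, the sketch below shows what one QA record covering the temporal-understanding ability could look like. The field names (`id`, `ability`, `images`, `conversations`) and the example scans and answer are illustrative assumptions, not the actual Med-MIM schema.

```python
# Hypothetical sketch of a single multi-image QA instruction record.
# Field names and content are assumptions for illustration only; they are
# not taken from the released Med-MIM dataset.
import json

record = {
    "id": "medmim_000001",
    # One of the four abilities named in the abstract:
    # temporal understanding, reasoning, comparison, co-reference.
    "ability": "temporal_understanding",
    "images": ["scan_2023_baseline.png", "scan_2024_followup.png"],
    "conversations": [
        {
            "role": "user",
            "content": "<image> <image> Comparing the follow-up scan with the "
                       "baseline, has the lesion grown, shrunk, or remained stable?",
        },
        {
            "role": "assistant",
            "content": "The lesion appears larger in the follow-up scan than in the baseline.",
        },
    ],
}

print(json.dumps(record, indent=2))
```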
Xikai Yang, Juzheng Miao, Yuchen Yuan, Jiaze Wang, Qi Dou, Jinpeng Li, Pheng-Ann Heng
Current status and development of medicine; medical research methods
Xikai Yang, Juzheng Miao, Yuchen Yuan, Jiaze Wang, Qi Dou, Jinpeng Li, Pheng-Ann Heng. Medical Large Vision Language Models with Multi-Image Visual Ability [EB/OL]. (2025-05-25) [2025-07-16]. https://arxiv.org/abs/2505.19031.