MedM-VL: What Makes a Good Medical LVLM?
Medical image analysis is essential in modern healthcare. Deep learning has redirected research focus toward complex medical multimodal tasks, including report generation and visual question answering. Traditional task-specific models often fall short in handling these challenges. Large vision-language models (LVLMs) offer new solutions for solving such tasks. In this study, we build on the popular LLaVA framework to systematically explore model architectures and training strategies for both 2D and 3D medical LVLMs. We present extensive empirical findings and practical guidance. To support reproducibility and future research, we release a modular codebase, MedM-VL, and two pre-trained models: MedM-VL-2D for 2D medical image analysis and MedM-VL-CT-Chest for 3D CT-based applications. The code and models are available at: https://github.com/MSIIP/MedM-VL
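The LLaVA-style composition referenced above (a vision encoder whose features are projected into the embedding space of a language model) can be illustrated with the minimal sketch below. This is not code from the MedM-VL repository: the class, module, and argument names are hypothetical, and the sketch only assumes the general encoder–projector–LLM pattern described in the abstract.

```python
# Minimal, illustrative sketch of a LLaVA-style LVLM composition.
# All names here are hypothetical and do not reflect the MedM-VL codebase.
import torch
import torch.nn as nn


class LLaVAStyleLVLM(nn.Module):
    """Vision encoder -> projector -> language model."""

    def __init__(self, vision_encoder: nn.Module, projector: nn.Module, llm: nn.Module):
        super().__init__()
        self.vision_encoder = vision_encoder  # e.g., a 2D ViT or a 3D CT encoder
        self.projector = projector            # maps visual features into the LLM embedding space
        self.llm = llm                        # assumed to accept input embeddings directly

    def forward(self, images: torch.Tensor, text_embeds: torch.Tensor) -> torch.Tensor:
        # Encode images into a sequence of visual feature tokens: (B, N_vis, D_vis).
        visual_feats = self.vision_encoder(images)
        # Project visual tokens into the LLM embedding dimension: (B, N_vis, D_llm).
        visual_tokens = self.projector(visual_feats)
        # Prepend visual tokens to the text embeddings and decode with the LLM.
        inputs = torch.cat([visual_tokens, text_embeds], dim=1)
        return self.llm(inputs)
```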
Yiming Shi, Shaoshuai Yang, Xun Zhu, Haoyu Wang, Miao Li, Ji Wu
Medical research methods; current state of medicine; medical development
Yiming Shi, Shaoshuai Yang, Xun Zhu, Haoyu Wang, Miao Li, Ji Wu. MedM-VL: What Makes a Good Medical LVLM? [EB/OL]. (2025-04-05) [2025-05-10]. https://arxiv.org/abs/2504.04323.