Rethinking Test-Time Scaling for Medical AI: Model and Task-Aware Strategies for LLMs and VLMs
Test-time scaling has recently emerged as a promising approach for enhancing the reasoning capabilities of large language models or vision-language models during inference. Although a variety of test-time scaling strategies have been proposed, and interest in their application to the medical domain is growing, many critical aspects remain underexplored, including their effectiveness for vision-language models and the identification of optimal strategies for different settings. In this paper, we conduct a comprehensive investigation of test-time scaling in the medical domain. We evaluate its impact on both large language models and vision-language models, considering factors such as model size, inherent model characteristics, and task complexity. Finally, we assess the robustness of these strategies under user-driven factors, such as misleading information embedded in prompts. Our findings offer practical guidelines for the effective use of test-time scaling in medical applications and provide insights into how these strategies can be further refined to meet the reliability and interpretability demands of the medical domain.
Gyutaek Oh, Seoyeon Kim, Sangjoon Park, Byung-Hoon Kim
Subjects: Medical research methods; current state and development of medicine
Gyutaek Oh, Seoyeon Kim, Sangjoon Park, Byung-Hoon Kim. Rethinking Test-Time Scaling for Medical AI: Model and Task-Aware Strategies for LLMs and VLMs [EB/OL]. (2025-06-16) [2025-07-25]. https://arxiv.org/abs/2506.13102