Disentangling Reasoning and Knowledge in Medical Large Language Models
Medical reasoning in large language models (LLMs) aims to emulate clinicians' diagnostic thinking, but current benchmarks such as MedQA-USMLE, MedMCQA, and PubMedQA often mix reasoning with factual recall. We address this by separating 11 biomedical QA benchmarks into reasoning- and knowledge-focused subsets using a PubMedBERT classifier that reaches 81 percent accuracy, comparable to human performance. Our analysis shows that only 32.8 percent of questions require complex reasoning. We evaluate biomedical models (HuatuoGPT-o1, MedReason, m1) and general-domain models (DeepSeek-R1, o4-mini, Qwen3), finding consistent gaps between knowledge and reasoning performance. For example, m1 scores 60.5 on knowledge but only 47.1 on reasoning. In adversarial tests where models are misled with incorrect initial reasoning, biomedical models degrade sharply, while larger or RL-trained general models show more robustness. To address this, we train BioMed-R1 using fine-tuning and reinforcement learning on reasoning-heavy examples. It achieves the strongest performance among similarly sized models. Further gains may come from incorporating clinical case reports and training with adversarial and backtracking scenarios.
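The abstract describes, but does not include, the PubMedBERT classifier used to split questions into reasoning- and knowledge-focused subsets. The following is a minimal sketch of how such a binary classifier could be set up with Hugging Face transformers. The base checkpoint name is the public PubMedBERT release; the label order is an assumption, and `num_labels=2` attaches a fresh, untrained classification head, so this does not reproduce the paper's 81-percent-accuracy classifier, whose fine-tuned weights are not shown here.

```python
# Minimal sketch: scoring a QA item as "reasoning" vs. "knowledge" with a
# PubMedBERT-based sequence classifier. The classification head here is
# untrained (an assumption for illustration); in practice one would load
# weights fine-tuned on questions labeled by annotators.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

BASE = "microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext"
LABELS = ["knowledge", "reasoning"]  # assumed label order

tokenizer = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForSequenceClassification.from_pretrained(BASE, num_labels=2)
model.eval()

question = (
    "A 54-year-old man presents with crushing chest pain radiating to the "
    "left arm. ECG shows ST elevation in leads II, III, and aVF. Which "
    "coronary artery is most likely occluded?"
)

with torch.no_grad():
    inputs = tokenizer(question, truncation=True, max_length=512,
                       return_tensors="pt")
    probs = torch.softmax(model(**inputs).logits, dim=-1).squeeze()

print({label: round(p.item(), 3) for label, p in zip(LABELS, probs)})
```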
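The adversarial test, in which a model is misled with incorrect initial reasoning, can be illustrated with plain prompt construction. The wording below is a hypothetical example, not the paper's exact protocol: the injected chain of thought asserts an anterior infarct (LAD), whereas ST elevation in leads II, III, and aVF indicates an inferior infarct (RCA), so a robust model should override the lead-in.

```python
# Sketch of a misleading-reasoning probe: prepend a deliberately wrong
# "initial reasoning" trace and observe whether the model abandons the
# correct answer. Prompt wording is illustrative, not the paper's setup.
question = (
    "ST elevation in leads II, III, and aVF. Which coronary artery is "
    "most likely occluded? (A) LAD (B) RCA (C) LCx (D) LMCA"
)
wrong_lead_in = (
    "Let's think step by step. ST elevation in II, III, and aVF points to "
    "an anterior wall infarct, so the left anterior descending artery "
    "must be occluded."
)
adversarial_prompt = f"{question}\n\n{wrong_lead_in}\nTherefore, the answer is"
print(adversarial_prompt)  # fed to each model under evaluation
```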
Rahul Thapa, Qingyang Wu, Kevin Wu, Harrison Zhang, Angela Zhang, Eric Wu, Haotian Ye, Suhana Bedi, Nevin Aresh, Joseph Boen, Shriya Reddy, Ben Athiwaratkun, Shuaiwen Leon Song, James Zou
Medical research methods; medicine and health theory
Rahul Thapa, Qingyang Wu, Kevin Wu, Harrison Zhang, Angela Zhang, Eric Wu, Haotian Ye, Suhana Bedi, Nevin Aresh, Joseph Boen, Shriya Reddy, Ben Athiwaratkun, Shuaiwen Leon Song, James Zou. Disentangling Reasoning and Knowledge in Medical Large Language Models [EB/OL]. (2025-05-16) [2025-06-23]. https://arxiv.org/abs/2505.11462.