National Preprint Platform

Instruction Tuning and CoT Prompting for Contextual Medical QA with LLMs

Source: arXiv

Abstract

Large language models (LLMs) have shown great potential in medical question answering (MedQA), yet adapting them to biomedical reasoning remains challenging due to domain-specific complexity and limited supervision. In this work, we study how prompt design and lightweight fine-tuning affect the performance of open-source LLMs on PubMedQA, a benchmark for multiple-choice biomedical questions. We focus on two widely used prompting strategies: standard instruction prompts and Chain-of-Thought (CoT) prompts. We apply QLoRA for parameter-efficient instruction tuning. Across multiple model families and sizes, our experiments show that CoT prompting alone can improve reasoning in zero-shot settings, while instruction tuning significantly boosts accuracy. However, fine-tuning on CoT prompts does not universally enhance performance and may even degrade it for certain larger models. These findings suggest that reasoning-aware prompts are useful, but their benefits are model- and scale-dependent. Our study offers practical insights into combining prompt engineering with efficient fine-tuning for medical QA applications.
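The two prompting strategies the abstract contrasts can be sketched as simple templates. This is an illustrative sketch, not the paper's actual prompts: the wording, field layout, and the yes/no/maybe answer format (standard for PubMedQA) are assumptions for illustration.

```python
def standard_prompt(context: str, question: str) -> str:
    """Standard instruction prompt: ask directly for the final answer."""
    return (
        "You are a biomedical expert. Based on the context, answer the "
        "question with yes, no, or maybe.\n"
        f"Context: {context}\n"
        f"Question: {question}\n"
        "Answer:"
    )


def cot_prompt(context: str, question: str) -> str:
    """Chain-of-Thought prompt: elicit step-by-step reasoning before the answer."""
    return (
        "You are a biomedical expert. Based on the context, reason step by "
        "step, then conclude with yes, no, or maybe.\n"
        f"Context: {context}\n"
        f"Question: {question}\n"
        "Let's think step by step:"
    )


# Example usage with a toy PubMedQA-style item
p = cot_prompt(
    "Aspirin irreversibly inhibits platelet cyclooxygenase.",
    "Does aspirin reduce platelet aggregation?",
)
```

In the zero-shot setting the abstract describes, only the prompt changes between conditions; instruction tuning with QLoRA would then fine-tune low-rank adapters on examples formatted with one of these templates.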

Chenqian Le, Ziheng Gong, Chihang Wang, Haowei Ni, Panfeng Li, Xupeng Chen

Subject: Medical research methods

Chenqian Le, Ziheng Gong, Chihang Wang, Haowei Ni, Panfeng Li, Xupeng Chen. Instruction Tuning and CoT Prompting for Contextual Medical QA with LLMs [EB/OL]. (2025-06-13) [2025-06-28]. https://arxiv.org/abs/2506.12182.
