
Evaluating Prompt Engineering Techniques for Accuracy and Confidence Elicitation in Medical LLMs

Source: arXiv
Abstract

This paper investigates how prompt engineering techniques affect both accuracy and confidence elicitation in Large Language Models (LLMs) applied to medical contexts. Using a stratified dataset of Persian board exam questions across multiple specialties, we evaluated five LLMs (GPT-4o, o3-mini, Llama-3.3-70b, Llama-3.1-8b, and DeepSeek-v3) across 156 configurations. These configurations varied in temperature settings (0.3, 0.7, 1.0), prompt styles (Chain-of-Thought, Few-Shot, Emotional, Expert Mimicry), and confidence scales (1-10, 1-100). We used AUC-ROC, Brier Score, and Expected Calibration Error (ECE) to evaluate alignment between confidence and actual performance. Chain-of-Thought prompts improved accuracy but also led to overconfidence, highlighting the need for calibration. Emotional prompting further inflated confidence, risking poor decisions. Smaller models like Llama-3.1-8b underperformed across all metrics, while proprietary models showed higher accuracy but still lacked calibrated confidence. These results suggest prompt engineering must address both accuracy and uncertainty to be effective in high-stakes medical tasks.
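
To make the calibration metrics named in the abstract concrete, here is a minimal Python sketch of how Brier Score and Expected Calibration Error can be computed from elicited confidences. The function names, example arrays, and bin count are illustrative assumptions, not the paper's actual pipeline:

import numpy as np

def brier_score(confidences, correct):
    # Mean squared gap between stated confidence (in [0, 1]) and the 0/1 outcome.
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    return float(np.mean((confidences - correct) ** 2))

def expected_calibration_error(confidences, correct, n_bins=10):
    # Bin predictions by confidence; ECE is the per-bin |accuracy - mean confidence|
    # gap, weighted by the fraction of predictions falling in each bin.
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = abs(correct[in_bin].mean() - confidences[in_bin].mean())
            ece += in_bin.mean() * gap
    return float(ece)

# Hypothetical example: confidences elicited on a 1-100 scale, rescaled to [0, 1].
conf = np.array([95, 80, 99, 60, 90]) / 100.0
hit = np.array([1, 1, 0, 1, 0])  # 1 = model's answer was correct
print(brier_score(conf, hit))
print(expected_calibration_error(conf, hit, n_bins=5))

Under this convention a perfectly calibrated model has ECE near 0; the overconfidence the abstract describes shows up as per-bin mean confidence exceeding per-bin accuracy.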

Nariman Naderi, Zahra Atf, Peter R Lewis, Aref Mahjoub far, Seyed Amir Ahmad Safavi-Naini, Ali Soroush

Subjects: Medical Research Methods; Computing Technology, Computer Technology

Nariman Naderi, Zahra Atf, Peter R Lewis, Aref Mahjoub far, Seyed Amir Ahmad Safavi-Naini, Ali Soroush. Evaluating Prompt Engineering Techniques for Accuracy and Confidence Elicitation in Medical LLMs [EB/OL]. (2025-05-29) [2025-07-16]. https://arxiv.org/abs/2506.00072.