
Can Large Language Models Match the Conclusions of Systematic Reviews?

Source: arXiv
Abstract

Systematic reviews (SRs), in which experts summarize and analyze evidence across individual studies to provide insights on a specialized topic, are a cornerstone for evidence-based clinical decision-making, research, and policy. Given the exponential growth of scientific articles, there is growing interest in using large language models (LLMs) to automate SR generation. However, the ability of LLMs to critically assess evidence and reason across multiple documents to provide recommendations at the same proficiency as domain experts remains poorly characterized. We therefore ask: Can LLMs match the conclusions of systematic reviews written by clinical experts when given access to the same studies? To explore this question, we present MedEvidence, a benchmark pairing findings from 100 SRs with the studies they are based on. We benchmark 24 LLMs on MedEvidence, including reasoning, non-reasoning, and medical-specialist models across a range of sizes (7B to 700B parameters). Through our systematic evaluation, we find that reasoning does not necessarily improve performance, larger models do not consistently yield greater gains, and knowledge-based fine-tuning degrades accuracy on MedEvidence. Instead, most models exhibit similar behavior: performance tends to degrade as token length increases, their responses show overconfidence, and, contrary to human experts, all models show a lack of scientific skepticism toward low-quality findings. These results suggest that more work is still required before LLMs can reliably match the observations from expert-conducted SRs, even though these systems are already deployed and being used by clinicians. We release our codebase and benchmark to the broader research community to further investigate LLM-based SR systems.
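
As a rough illustration of what such an evaluation involves, the sketch below scores an LLM's verdict on a MedEvidence-style record that pairs a systematic-review finding with the studies it rests on, and compares it to the expert conclusion. This is a minimal sketch, not the authors' released code: the record fields (finding, studies, expert_label), the prompt wording, the sample file name, and the query_model stub are all hypothetical placeholders.

# Minimal sketch of a MedEvidence-style evaluation loop.
# Hypothetical record format and model interface; plug in a real LLM client.

import json
from typing import Dict, List

def query_model(prompt: str) -> str:
    """Placeholder for an actual LLM call (e.g., via an API client).
    Expected to return 'yes', 'no', or 'uncertain'."""
    raise NotImplementedError("plug in an LLM client here")

def build_prompt(finding: str, study_texts: List[str]) -> str:
    """Assemble the included studies and ask whether they support the finding."""
    studies = "\n\n".join(
        f"Study {i + 1}:\n{text}" for i, text in enumerate(study_texts)
    )
    return (
        "You are given the studies included in a systematic review.\n\n"
        f"{studies}\n\n"
        "Based only on these studies, does the evidence support the "
        "following conclusion? Answer yes, no, or uncertain.\n\n"
        f"Conclusion: {finding}"
    )

def evaluate(records: List[Dict]) -> float:
    """Fraction of findings where the model's verdict matches the expert label."""
    correct = 0
    for rec in records:  # hypothetical keys: 'finding', 'studies', 'expert_label'
        answer = query_model(build_prompt(rec["finding"], rec["studies"]))
        correct += int(answer.strip().lower() == rec["expert_label"])
    return correct / len(records)

if __name__ == "__main__":
    with open("medevidence_sample.json") as f:  # hypothetical file name
        records = json.load(f)
    print(f"Agreement with expert conclusions: {evaluate(records):.1%}")

In this framing, "matching the conclusions" reduces to agreement between the model's verdict and the expert label over all paired findings; the paper's actual benchmark and metrics are defined in the released codebase.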

Christopher Polzak, Alejandro Lozano, Min Woo Sun, James Burgess, Yuhui Zhang, Kevin Wu, Serena Yeung-Levy

Subject areas: computational techniques for medical research methodology; computer technology

Christopher Polzak, Alejandro Lozano, Min Woo Sun, James Burgess, Yuhui Zhang, Kevin Wu, Serena Yeung-Levy. Can Large Language Models Match the Conclusions of Systematic Reviews? [EB/OL]. (2025-05-28) [2025-06-08]. https://arxiv.org/abs/2505.22787.
