Introducing Answered with Evidence -- a framework for evaluating whether LLM responses to biomedical questions are founded in evidence
The growing use of large language models (LLMs) for biomedical question answering raises concerns about the accuracy and evidentiary support of their responses. To address this, we present Answered with Evidence, a framework for evaluating whether LLM-generated answers are grounded in the scientific literature. We analyzed thousands of physician-submitted questions using a comparative pipeline that included: (1) Alexandria (formerly the Atropos Evidence Library), a retrieval-augmented generation (RAG) system based on novel observational studies, and (2) two PubMed-based retrieval-augmented systems (System and Perplexity). We found that the PubMed-based systems provided evidence-supported answers for approximately 44% of questions, while the novel evidence source did so for about 50%. Combined, these sources enabled reliable answers to over 70% of biomedical queries. As LLMs become increasingly capable of summarizing scientific content, maximizing their value will require systems that can accurately retrieve both published and custom-generated evidence, or generate such evidence in real time.
Julian D Baldwin, Christina Dinh, Arjun Mukerji, Neil Sanghavi, Saurabh Gombar
Subjects: current state and development of medicine; medical research methods; biological science research methods and techniques
Julian D Baldwin, Christina Dinh, Arjun Mukerji, Neil Sanghavi, Saurabh Gombar. Introducing Answered with Evidence -- a framework for evaluating whether LLM responses to biomedical questions are founded in evidence [EB/OL]. (2025-06-30) [2025-07-21]. https://arxiv.org/abs/2507.02975.