Evaluating the Retrieval Robustness of Large Language Models
Retrieval-augmented generation (RAG) generally enhances large language models' (LLMs) ability to solve knowledge-intensive tasks. However, RAG may also lead to performance degradation due to imperfect retrieval and the model's limited ability to leverage retrieved content. In this work, we evaluate the robustness of LLMs in practical RAG setups (henceforth retrieval robustness). We focus on three research questions: (1) whether RAG is always better than non-RAG; (2) whether more retrieved documents always lead to better performance; and (3) whether document orders impact results. To facilitate this study, we establish a benchmark of 1,500 open-domain questions, each with retrieved documents from Wikipedia. We introduce three robustness metrics, each corresponding to one research question. Our comprehensive experiments, involving 11 LLMs and 3 prompting strategies, reveal that all of these LLMs exhibit surprisingly high retrieval robustness; nonetheless, different degrees of imperfect robustness hinder them from fully utilizing the benefits of RAG.
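The three research questions suggest a straightforward evaluation loop: answer each question with and without retrieved documents, with varying numbers of documents, and with the documents in shuffled order. Below is a minimal Python sketch of such a protocol; the `answer` and `retrieve` callables, the exact-match scorer, and the choice of k values are illustrative assumptions, not the paper's actual metrics or implementation.

```python
import random

def exact_match(prediction: str, gold: str) -> bool:
    """Case-insensitive exact-match scoring, a common open-domain QA metric."""
    return prediction.strip().lower() == gold.strip().lower()

def evaluate(answer, retrieve, questions):
    """Run the three hypothetical robustness checks per question.

    `answer(question, docs)` and `retrieve(question, k)` are assumed
    stand-ins for an LLM call and a Wikipedia retriever, respectively.
    """
    for q in questions:
        docs = retrieve(q["question"], k=10)  # top-10 retrieved passages

        # RQ1: is RAG better than non-RAG for this question?
        no_rag_correct = exact_match(answer(q["question"], docs=[]), q["gold"])
        rag_correct = exact_match(answer(q["question"], docs=docs), q["gold"])

        # RQ2: do more retrieved documents monotonically help?
        correct_by_k = [
            exact_match(answer(q["question"], docs=docs[:k]), q["gold"])
            for k in (1, 5, 10)
        ]

        # RQ3: does permuting document order change the prediction?
        shuffled = random.sample(docs, len(docs))
        order_sensitive = (
            answer(q["question"], docs=docs) != answer(q["question"], docs=shuffled)
        )

        yield no_rag_correct, rag_correct, correct_by_k, order_sensitive
```

Aggregating these per-question outcomes over the benchmark would yield one robustness score per research question, in the spirit of the three metrics the abstract describes.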
Shuyang Cao, Karthik Radhakrishnan, David Rosenberg, Steven Lu, Pengxiang Cheng, Lu Wang, Shiyue Zhang
Computing Technology, Computer Technology
Shuyang Cao, Karthik Radhakrishnan, David Rosenberg, Steven Lu, Pengxiang Cheng, Lu Wang, Shiyue Zhang. Evaluating the Retrieval Robustness of Large Language Models [EB/OL]. (2025-05-27) [2025-06-14]. https://arxiv.org/abs/2505.21870.