
SPEAR: Subset-sampled Performance Evaluation via Automated Ground Truth Generation for RAG

Source: arXiv
Abstract

Retrieval-Augmented Generation (RAG) is a core approach for enhancing Large Language Models (LLMs), and the effectiveness of the retriever largely determines the overall response quality of RAG systems. Retrievers expose many hyperparameters that significantly affect performance and are sensitive to the specific application; nevertheless, hyperparameter optimization entails prohibitively high computational expense. Existing evaluation methods suffer either from prohibitive cost or from disconnection from domain-specific scenarios. This paper proposes SEARA (Subset sampling Evaluation for Automatic Retriever Assessment), which addresses evaluation-data challenges through subset sampling and achieves robust automated retriever evaluation via minimal retrieval fact extraction and comprehensive retrieval metrics. Based on real user queries, this method enables fully automated retriever evaluation at low cost, thereby identifying the optimal retriever for specific business scenarios. We validate our method across classic RAG applications in rednote, including a knowledge-based Q&A system and a retrieval-based travel assistant, successfully obtaining scenario-specific optimal retrievers.
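
The abstract only sketches the evaluation pipeline, so the following is a minimal illustrative sketch in Python, not the authors' implementation. It assumes hypothetical helpers (sample_queries, recall_at_k, evaluate_retrievers), a log of real user queries, a ground-truth mapping from query to relevant fact/chunk IDs produced by the automated fact-extraction step, and a dictionary of candidate retriever configurations; it only shows the shape of subset-sampled, metric-based retriever comparison described above.

import random

def sample_queries(query_log, n=200, seed=0):
    # Subset-sample a small, fixed evaluation set from real user queries.
    rng = random.Random(seed)
    return rng.sample(query_log, min(n, len(query_log)))

def recall_at_k(retrieved_ids, relevant_ids, k=5):
    # Fraction of ground-truth facts covered by the top-k retrieved chunks.
    hits = sum(1 for doc_id in retrieved_ids[:k] if doc_id in relevant_ids)
    return hits / max(len(relevant_ids), 1)

def evaluate_retrievers(retrievers, queries, ground_truth, k=5):
    # Score each candidate retriever configuration on the sampled subset
    # and return the best-scoring one for the target scenario.
    scores = {}
    for name, retrieve in retrievers.items():
        per_query = [recall_at_k(retrieve(q), ground_truth[q], k) for q in queries]
        scores[name] = sum(per_query) / len(per_query)
    return max(scores, key=scores.get), scores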

Zou Yuheng, Wang Yiran, Tian Yuzhu, Zhu Min, Huang Yanhua

Computing technology; computer technology

Zou Yuheng, Wang Yiran, Tian Yuzhu, Zhu Min, Huang Yanhua. SPEAR: Subset-sampled Performance Evaluation via Automated Ground Truth Generation for RAG [EB/OL]. (2025-07-09) [2025-07-22]. https://arxiv.org/abs/2507.06554.
