CiteEval: Principle-Driven Citation Evaluation for Source Attribution
Citation quality is crucial in information-seeking systems, directly influencing trust and the effectiveness of information access. Current evaluation frameworks, both human and automatic, mainly rely on Natural Language Inference (NLI) to assess binary or ternary supportiveness from cited sources, which we argue is a suboptimal proxy for citation evaluation. In this work, we introduce CiteEval, a principle-driven citation evaluation framework for fine-grained citation assessment within a broad context, encompassing not only the cited sources but also the full retrieval context, user query, and generated text. Guided by the proposed framework, we construct CiteBench, a multi-domain benchmark with high-quality human annotations on citation quality. To enable efficient evaluation, we further develop CiteEval-Auto, a suite of model-based metrics that exhibit strong correlation with human judgments. Experiments across diverse systems demonstrate CiteEval-Auto's superior ability to capture the multifaceted nature of citations compared to existing metrics, offering a principled and scalable approach to evaluating and improving model-generated citations.
Yumo Xu, Peng Qi, Jifan Chen, Kunlun Liu, Rujun Han, Lan Liu, Bonan Min, Vittorio Castelli, Arshit Gupta, Zhiguo Wang
Subject: Computing Technology, Computer Technology
Yumo Xu, Peng Qi, Jifan Chen, Kunlun Liu, Rujun Han, Lan Liu, Bonan Min, Vittorio Castelli, Arshit Gupta, Zhiguo Wang. CiteEval: Principle-Driven Citation Evaluation for Source Attribution [EB/OL]. (2025-06-02) [2025-07-09]. https://arxiv.org/abs/2506.01829.