
Criteria-Based LLM Relevance Judgments

Source: arXiv
Abstract

Relevance judgments are crucial for evaluating information retrieval systems, but traditional human-annotated labels are time-consuming and expensive. As a result, many researchers turn to automatic alternatives to accelerate method development. Among these, Large Language Models (LLMs) offer a scalable solution by generating relevance labels directly through prompting. However, prompting an LLM for a relevance label without constraints often yields not only incorrect predictions but also outputs that are difficult for humans to interpret. We propose the Multi-Criteria framework for LLM-based relevance judgments, which decomposes the notion of relevance into multiple criteria, such as exactness, coverage, topicality, and contextual fit, to improve the robustness and interpretability of retrieval evaluations compared to direct grading methods. We validate this approach on three datasets: the TREC Deep Learning tracks from 2019 and 2020, as well as LLMJudge (based on TREC DL 2023). Our results demonstrate that Multi-Criteria judgments improve system-ranking (leaderboard) performance. Moreover, we highlight the strengths and limitations of this approach relative to direct grading, offering insights that can guide the development of future automatic evaluation frameworks in information retrieval.
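
To make the abstract's idea concrete, here is a minimal Python sketch of criteria-based relevance judgment. Only the four criterion names come from the abstract; the prompt wording, the 0-3 grade scale, the ask-per-criterion loop, the parsing, and the mean-based aggregation are illustrative assumptions, not the authors' implementation.

# Illustrative sketch of criteria-based relevance judgment.
# Criterion names are from the paper's abstract; prompts, the 0-3
# scale, and the aggregation rule are assumptions for illustration.

from typing import Callable

CRITERIA = {
    "exactness":      "How precisely does the passage answer the query?",
    "coverage":       "How much of the passage is dedicated to the query topic?",
    "topicality":     "Is the passage on the same subject as the query?",
    "contextual_fit": "Does the passage provide relevant background context?",
}

PROMPT = (
    "Query: {query}\n"
    "Passage: {passage}\n"
    "Criterion: {question}\n"
    "Answer with a single integer grade from 0 (not at all) to 3 (fully)."
)

def judge_relevance(query: str, passage: str,
                    llm: Callable[[str], str]) -> int:
    """Grade each criterion with a separate LLM call, then aggregate."""
    grades = {}
    for name, question in CRITERIA.items():
        reply = llm(PROMPT.format(query=query, passage=passage,
                                  question=question))
        digits = [c for c in reply if c.isdigit()]
        # Parse defensively: take the first digit, clamp to the 0-3 scale.
        grades[name] = min(int(digits[0]), 3) if digits else 0
    # Assumed aggregation: round the mean criterion grade to one label.
    return round(sum(grades.values()) / len(grades))

if __name__ == "__main__":
    # Stub LLM so the sketch runs end to end; swap in a real model call.
    fake_llm = lambda prompt: "2"
    print(judge_relevance("what causes tides",
                          "Tides are caused by the moon's gravity...",
                          fake_llm))

Grading each criterion separately, rather than asking for a single relevance label, is what makes the per-document judgments inspectable: a human can see which criterion drove a low or high label.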

Naghmeh Farzi, Laura Dietz

DOI: 10.1145/3731120.3744591

Computing Technology, Computer Technology

Naghmeh Farzi, Laura Dietz. Criteria-Based LLM Relevance Judgments [EB/OL]. (2025-07-13) [2025-07-25]. https://arxiv.org/abs/2507.09488.
