
Knockout LLM Assessment: Using Large Language Models for Evaluations through Iterative Pairwise Comparisons

Source: arXiv
Abstract

Large Language Models (LLMs) have been shown to be effective evaluators across various domains, such as machine translation or the scientific domain. Current LLM-as-a-Judge approaches rely mostly on individual assessments or a single round of pairwise assessments, preventing the judge LLM from developing a global ranking perspective. To address this, we present Knockout Assessment, an LLM-as-a-Judge method using a knockout tournament system with iterative pairwise comparisons. Experiments across three LLMs on two datasets show that Knockout Assessment improves scoring accuracy, increasing Pearson correlation with expert evaluations by 0.07 on average for university-level exam scoring and machine translation evaluation, aligning LLM assessments more closely with human scoring.
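
The abstract describes a knockout tournament in which candidates are compared pairwise by a judge LLM and winners advance round by round. The sketch below is an illustrative reconstruction of that idea, not the authors' code: `judge_prefers(a, b)` is a hypothetical callable standing in for a judge-LLM prompt that returns True when candidate `a` is preferred over candidate `b`.

```python
from typing import Callable, List


def knockout_ranking(candidates: List[str],
                     judge_prefers: Callable[[str, str], bool]) -> List[str]:
    """Rank candidates by iterative pairwise knockout rounds.

    Winners of each comparison advance to the next round; eliminated
    candidates are recorded so the returned list runs from the overall
    winner down to the earliest eliminations.
    """
    eliminated: List[str] = []
    pool = list(candidates)
    while len(pool) > 1:
        next_round: List[str] = []
        # Pair up the current pool; an unpaired candidate receives a bye.
        for i in range(0, len(pool) - 1, 2):
            a, b = pool[i], pool[i + 1]
            winner, loser = (a, b) if judge_prefers(a, b) else (b, a)
            next_round.append(winner)
            eliminated.append(loser)
        if len(pool) % 2 == 1:
            next_round.append(pool[-1])  # bye for the odd candidate out
        pool = next_round
    # Overall winner first, then losers in reverse order of elimination.
    return pool + eliminated[::-1]


if __name__ == "__main__":
    # Toy judge: prefer the longer answer (a stand-in for an LLM judgment).
    ranking = knockout_ranking(
        ["short", "a bit longer", "the longest answer here", "mid"],
        judge_prefers=lambda a, b: len(a) > len(b),
    )
    print(ranking)
```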

Isik Baran Sandan, Tu Anh Dinh, Jan Niehues

Subject: Linguistics

Isik Baran Sandan, Tu Anh Dinh, Jan Niehues. Knockout LLM Assessment: Using Large Language Models for Evaluations through Iterative Pairwise Comparisons [EB/OL]. (2025-06-04) [2025-06-30]. https://arxiv.org/abs/2506.03785.
