Revisiting Uncertainty Quantification Evaluation in Language Models: Spurious Interactions with Response Length Bias Results

Source: arXiv
English Abstract

Uncertainty Quantification (UQ) in Language Models (LMs) is key to improving their safety and reliability. Evaluations often use metrics like AUROC to assess how well UQ methods (e.g., negative sequence probabilities) correlate with task correctness functions (e.g., ROUGE-L). We show that mutual biases--when both UQ methods and correctness functions are biased by the same factors--systematically distort evaluation. First, we formally prove that any mutual bias non-randomly skews AUROC rankings, compromising benchmark integrity. Second, we confirm this happens empirically by testing 7 widely used correctness functions, from lexical-based and embedding-based metrics to LM-as-a-judge approaches, across 4 datasets x 4 models x 8 UQ methods. Our analysis shows that length biases in correctness functions distort UQ assessments by interacting with length biases in UQ methods. We identify LM-as-a-judge methods as the least length-biased, offering a promising path for a fairer UQ evaluation.
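
To make the evaluation setup described above concrete, the following is a minimal sketch (not from the paper) of the standard UQ evaluation pipeline: score each response with a UQ method (here, the negative sum of token log-probabilities), label correctness by thresholding a correctness function (here, ROUGE-L F1), and report the AUROC of the uncertainty score as a detector of incorrect responses. The threshold of 0.3 and the use of the rouge-score and scikit-learn packages are illustrative assumptions, not the authors' exact configuration.

import numpy as np
from sklearn.metrics import roc_auc_score
from rouge_score import rouge_scorer

def negative_seq_logprob(token_logprobs):
    # UQ score: negative sum of token log-probabilities; higher = more uncertain.
    return -float(np.sum(token_logprobs))

def correctness_labels(predictions, references, threshold=0.3):
    # Binary correctness from ROUGE-L F1 against a gold answer (threshold is illustrative).
    scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
    return [
        int(scorer.score(ref, pred)["rougeL"].fmeasure >= threshold)
        for pred, ref in zip(predictions, references)
    ]

def uq_auroc(token_logprobs_per_response, predictions, references):
    # AUROC of the uncertainty score as a predictor of incorrectness.
    uncertainty = [negative_seq_logprob(lp) for lp in token_logprobs_per_response]
    incorrect = [1 - c for c in correctness_labels(predictions, references)]
    return roc_auc_score(incorrect, uncertainty)

Note that both quantities in this sketch depend on response length: the negative log-probability sum accumulates over more tokens, and lexical-overlap scores such as ROUGE-L are also length-sensitive. This is exactly the kind of mutual bias the abstract argues can spuriously skew AUROC rankings.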

Andrea Santilli, Adam Golinski, Michael Kirchhof, Federico Danieli, Arno Blaas, Miao Xiong, Luca Zappella, Sinead Williamson

Subject: Computing Technology, Computer Technology

Andrea Santilli, Adam Golinski, Michael Kirchhof, Federico Danieli, Arno Blaas, Miao Xiong, Luca Zappella, Sinead Williamson. Revisiting Uncertainty Quantification Evaluation in Language Models: Spurious Interactions with Response Length Bias Results [EB/OL]. (2025-04-18) [2025-06-24]. https://arxiv.org/abs/2504.13677.
