
Inv-Entropy: A Fully Probabilistic Framework for Uncertainty Quantification in Language Models

Source: arXiv
Abstract

Large language models (LLMs) have transformed natural language processing, but their reliable deployment requires effective uncertainty quantification (UQ). Existing UQ methods are often heuristic and lack a probabilistic foundation. This paper begins by providing a theoretical justification for the role of perturbations in UQ for LLMs. We then introduce a dual random walk perspective, modeling input-output pairs as two Markov chains with transition probabilities defined by semantic similarity. Building on this, we propose a fully probabilistic framework based on an inverse model, which quantifies uncertainty by evaluating the diversity of the input space conditioned on a given output through systematic perturbations. Within this framework, we define a new uncertainty measure, Inv-Entropy. A key strength of our framework is its flexibility: it supports various definitions of uncertainty measures, embeddings, perturbation strategies, and similarity metrics. We also propose GAAP, a perturbation algorithm based on genetic algorithms, which enhances the diversity of sampled inputs. In addition, we introduce a new evaluation metric, Temperature Sensitivity of Uncertainty (TSU), which directly assesses uncertainty without relying on correctness as a proxy. Extensive experiments demonstrate that Inv-Entropy outperforms existing semantic UQ methods. The code to reproduce the results can be found at https://github.com/UMDataScienceLab/Uncertainty-Quantification-for-LLMs.
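The following is a minimal, illustrative sketch of the core idea described above: pairwise semantic similarities among perturbed inputs that map to a given output are turned into Markov-chain transition probabilities, and the entropy of the resulting input-side distribution serves as an uncertainty signal. It is not the authors' implementation (see the linked GitHub repository for the official code); the embedding-similarity matrix, softmax temperature, and toy numbers are assumptions for illustration only.

```python
# Sketch of an input-side random walk over perturbed inputs, assuming a
# precomputed semantic-similarity matrix (e.g., cosine similarity of
# sentence embeddings). Not the official Inv-Entropy implementation.
import numpy as np


def transition_matrix(similarity: np.ndarray, temperature: float = 0.1) -> np.ndarray:
    """Row-normalize exp(similarity / temperature) into transition probabilities."""
    logits = similarity / temperature
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    weights = np.exp(logits)
    return weights / weights.sum(axis=1, keepdims=True)


def entropy(p: np.ndarray, eps: float = 1e-12) -> float:
    """Shannon entropy of a discrete distribution."""
    p = np.clip(p, eps, 1.0)
    return float(-(p * np.log(p)).sum())


# Toy example: similarities among four perturbed inputs associated with the
# same output (values are illustrative, not from the paper).
sim = np.array([
    [1.00, 0.85, 0.80, 0.20],
    [0.85, 1.00, 0.75, 0.25],
    [0.80, 0.75, 1.00, 0.30],
    [0.20, 0.25, 0.30, 1.00],
])

P = transition_matrix(sim)

# Stationary distribution of the input-side random walk (Perron eigenvector of P^T).
eigvals, eigvecs = np.linalg.eig(P.T)
stationary = np.abs(np.real(eigvecs[:, np.argmax(np.real(eigvals))]))
stationary /= stationary.sum()

# Higher entropy -> more diverse inputs consistent with the same output,
# i.e., greater uncertainty in the spirit of the inverse-model view.
print("input-side entropy:", entropy(stationary))
```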

Haoyi Song, Ruihan Ji, Naichen Shi, Fan Lai, Raed Al Kontar

Subject: Computing Technology; Computer Technology

Haoyi Song, Ruihan Ji, Naichen Shi, Fan Lai, Raed Al Kontar. Inv-Entropy: A Fully Probabilistic Framework for Uncertainty Quantification in Language Models [EB/OL]. (2025-06-11) [2025-07-19]. https://arxiv.org/abs/2506.09684.