Token-Level Uncertainty Estimation for Large Language Model Reasoning
While Large Language Models (LLMs) have demonstrated impressive capabilities, their output quality remains inconsistent across application scenarios, making it difficult to identify trustworthy responses, especially in complex tasks that require multi-step reasoning. In this paper, we propose a token-level uncertainty estimation framework that enables LLMs to self-assess and self-improve their generation quality in mathematical reasoning. Specifically, we introduce low-rank random weight perturbation into LLM decoding, generating predictive distributions from which we estimate token-level uncertainties. We then aggregate these uncertainties to reflect the semantic uncertainty of the generated sequences. Experiments on mathematical reasoning datasets of varying difficulty demonstrate that our token-level uncertainty metrics strongly correlate with answer correctness and model robustness. Additionally, we explore using uncertainty to directly enhance the model's reasoning performance through multiple generations combined with a particle filtering algorithm. Our approach consistently outperforms existing uncertainty estimation methods, establishing effective uncertainty estimation as a valuable tool for both evaluating and improving reasoning generation in LLMs.
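The abstract describes estimating token-level uncertainty by perturbing model weights with low-rank random noise and comparing the resulting predictive distributions. The following is a minimal sketch of that idea, not the paper's implementation: the function names, the rank and scale hyperparameters, and the use of predictive entropy as the uncertainty measure are all illustrative assumptions.

```python
import torch

def lowrank_perturb(weight, rank=4, scale=1e-3):
    """Return a copy of a (out_dim, in_dim) weight matrix with an added
    random low-rank perturbation: W + scale * (A @ B) / sqrt(rank).
    Rank and scale are illustrative choices, not values from the paper."""
    out_dim, in_dim = weight.shape
    A = torch.randn(out_dim, rank, device=weight.device, dtype=weight.dtype)
    B = torch.randn(rank, in_dim, device=weight.device, dtype=weight.dtype)
    return weight + scale * (A @ B) / rank ** 0.5

def token_uncertainty(logits_samples):
    """Estimate per-token uncertainty from K forward passes, each run
    with an independently perturbed model.

    logits_samples: tensor of shape (K, seq_len, vocab_size).
    Returns the predictive entropy of the sample-averaged distribution
    at each token position, shape (seq_len,)."""
    probs = torch.softmax(logits_samples, dim=-1)   # (K, T, V)
    mean_probs = probs.mean(dim=0)                  # (T, V) predictive distribution
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)
    return entropy                                  # (T,) token-level uncertainty
```

In this reading, tokens whose distribution shifts heavily under weight perturbation receive high entropy, flagging unreliable steps in a reasoning chain.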
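The abstract also mentions aggregating token uncertainties into a sequence-level score and using a particle filtering algorithm over multiple generations. Below is a toy sketch under stated assumptions: mean aggregation and exponential weighting are placeholder choices, and a real particle filter would generate new continuations between resampling rounds rather than reweighting fixed candidates.

```python
import numpy as np

def sequence_uncertainty(token_uncertainties):
    """Aggregate token-level uncertainties into one sequence score.
    Mean aggregation is one simple choice; the paper's exact
    aggregation may differ."""
    return float(np.mean(token_uncertainties))

def particle_filter_select(candidates, num_rounds=3, temperature=1.0, seed=0):
    """Toy particle-filtering-style selection over candidate reasoning
    chains. candidates: list of (text, token_uncertainties) pairs.
    Each round resamples particles with weights favoring low sequence
    uncertainty, concentrating mass on confident generations."""
    rng = np.random.default_rng(seed)
    particles = list(range(len(candidates)))
    for _ in range(num_rounds):
        scores = np.array([sequence_uncertainty(candidates[i][1])
                           for i in particles])
        weights = np.exp(-scores / temperature)     # low uncertainty -> high weight
        weights /= weights.sum()
        particles = list(rng.choice(particles, size=len(particles), p=weights))
    # Return the candidate that survives resampling most often.
    best = max(set(particles), key=particles.count)
    return candidates[best][0]
```

A usage example: given several sampled solutions to a math problem, each paired with its per-token entropies, `particle_filter_select` returns the chain the uncertainty signal judges most trustworthy.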
Tunyu Zhang, Haizhou Shi, Yibin Wang, Hengyi Wang, Xiaoxiao He, Zhuowei Li, Haoxian Chen, Ligong Han, Kai Xu, Huan Zhang, Dimitris Metaxas, Hao Wang
Computing Technology, Computer Technology
Tunyu Zhang, Haizhou Shi, Yibin Wang, Hengyi Wang, Xiaoxiao He, Zhuowei Li, Haoxian Chen, Ligong Han, Kai Xu, Huan Zhang, Dimitris Metaxas, Hao Wang. Token-Level Uncertainty Estimation for Large Language Model Reasoning [EB/OL]. (2025-05-16) [2025-06-27]. https://arxiv.org/abs/2505.11737.