National Preprint Platform

Reasoning about Uncertainty: Do Reasoning Models Know When They Don't Know?

Source: arXiv
Abstract

Reasoning language models have set state-of-the-art (SOTA) records on many challenging benchmarks, enabled by multi-step reasoning induced using reinforcement learning. However, like previous language models, reasoning models are prone to generating confident, plausible responses that are incorrect (hallucinations). Knowing when and how much to trust these models is critical to the safe deployment of reasoning models in real-world applications. To this end, we explore uncertainty quantification of reasoning models in this work. Specifically, we ask three fundamental questions: First, are reasoning models well-calibrated? Second, does deeper reasoning improve model calibration? Finally, inspired by humans' innate ability to double-check their thought processes to verify the validity of their answers and their confidence, we ask: can reasoning models improve their calibration by explicitly reasoning about their chain-of-thought traces? We introduce introspective uncertainty quantification (UQ) to explore this direction. In extensive evaluations on SOTA reasoning models across a broad range of benchmarks, we find that reasoning models: (i) are typically overconfident, with self-verbalized confidence estimates often greater than 85% particularly for incorrect responses, (ii) become even more overconfident with deeper reasoning, and (iii) can become better calibrated through introspection (e.g., o3-Mini and DeepSeek R1) but not uniformly (e.g., Claude 3.7 Sonnet becomes more poorly calibrated). Lastly, we conclude with important research directions to design necessary UQ benchmarks and improve the calibration of reasoning models.
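The calibration question the abstract poses is commonly scored with expected calibration error (ECE): predictions are binned by self-reported confidence, and the gap between each bin's mean confidence and its accuracy is averaged. The sketch below is illustrative only; the function name, binning scheme, and example numbers are assumptions, not the paper's implementation.

```python
# Minimal sketch of expected calibration error (ECE) over self-verbalized
# confidence scores. Equal-width bins; hedged example, not the paper's code.

def expected_calibration_error(confidences, correct, n_bins=10):
    """Weighted average |mean confidence - accuracy| across confidence bins."""
    n = len(confidences)
    ece = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        # Assign each prediction to one half-open bin (lo, hi];
        # confidence exactly 0.0 falls into the first bin.
        idx = [i for i, c in enumerate(confidences)
               if (lo < c <= hi) or (b == 0 and c == lo)]
        if not idx:
            continue
        avg_conf = sum(confidences[i] for i in idx) / len(idx)
        acc = sum(correct[i] for i in idx) / len(idx)
        ece += (len(idx) / n) * abs(avg_conf - acc)
    return ece

# Example: a model reporting 90% confidence while answering correctly
# only half the time, mirroring the overconfidence pattern described above.
confs = [0.9, 0.9, 0.9, 0.9]
hits = [1, 0, 1, 0]
print(expected_calibration_error(confs, hits))  # ≈ 0.4 (overconfident)
```

A well-calibrated model drives this value toward zero; the abstract's finding of confidence above 85% on incorrect answers corresponds to large per-bin gaps of this kind.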

Zhiting Mei, Christina Zhang, Tenny Yin, Justin Lidard, Ola Shorinwa, Anirudha Majumdar

Subject: Computing Technology, Computer Technology

Zhiting Mei, Christina Zhang, Tenny Yin, Justin Lidard, Ola Shorinwa, Anirudha Majumdar. Reasoning about Uncertainty: Do Reasoning Models Know When They Don't Know? [EB/OL]. (2025-06-22) [2025-07-03]. https://arxiv.org/abs/2506.18183.
