Rewarding Doubt: A Reinforcement Learning Approach to Confidence Calibration of Large Language Models
A safe and trustworthy use of Large Language Models (LLMs) requires an accurate expression of confidence in their answers. We introduce a novel Reinforcement Learning (RL) approach for LLM calibration that fine-tunes LLMs to elicit calibrated confidence estimates in their answers to factual questions. We model the problem as a betting game in which the model predicts a confidence score together with every answer, and we design a reward function that penalizes both over- and under-confidence. We prove that under our reward design an optimal policy results in a perfectly calibrated confidence estimation. Our experiments demonstrate significantly improved confidence calibration and generalization to new tasks without re-training, indicating that our approach teaches a general confidence awareness. This approach enables the training of inherently calibrated LLMs.
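The abstract does not spell out the reward function. As a rough, hedged illustration of the betting-game idea, the sketch below uses a logarithmic proper scoring rule, which is an assumption for illustration and not necessarily the paper's exact reward design. Such a rule rewards high confidence on correct answers and penalizes it on wrong ones, and its expected value is maximized when the stated confidence equals the true probability of being correct, i.e., at perfect calibration. The function name betting_reward and the 70% accuracy figure are hypothetical; the sketch assumes NumPy.

import numpy as np

def betting_reward(confidence: float, correct: bool, eps: float = 1e-6) -> float:
    """Logarithmic proper scoring rule used here as an illustrative reward.

    Gives log(c) for a correct answer and log(1 - c) for a wrong one, so
    both over- and under-confidence lower the expected reward.
    """
    c = np.clip(confidence, eps, 1.0 - eps)
    return float(np.log(c) if correct else np.log(1.0 - c))

# Sanity check: if the model's true accuracy is 70%, the expected reward
# is maximized when the stated confidence is 0.7 (perfect calibration).
true_accuracy = 0.7
candidate_confidences = np.linspace(0.05, 0.95, 19)
expected_rewards = [
    true_accuracy * betting_reward(c, True)
    + (1 - true_accuracy) * betting_reward(c, False)
    for c in candidate_confidences
]
best = candidate_confidences[int(np.argmax(expected_rewards))]
print(f"Expected reward is maximized at confidence ~ {best:.2f}")

Running the sketch prints a best confidence of about 0.70, illustrating why a reward of this kind can drive a policy toward calibrated confidence estimates.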
Matthias Keicher, Nassir Navab, Ege Özsoy, Kamilia Zaripova, Chantal Pellegrini, David Bani-Harouni, Paul Stangel
Computing Technology, Computer Technology
Matthias Keicher, Nassir Navab, Ege Özsoy, Kamilia Zaripova, Chantal Pellegrini, David Bani-Harouni, Paul Stangel. Rewarding Doubt: A Reinforcement Learning Approach to Confidence Calibration of Large Language Models [EB/OL]. (2025-03-04) [2025-05-24]. https://arxiv.org/abs/2503.02623.