
Post-Training Large Language Models via Reinforcement Learning from Self-Feedback


Source: arXiv
Abstract

Large Language Models (LLMs) often produce plausible but poorly calibrated answers, limiting their reliability on reasoning-intensive tasks. We present Reinforcement Learning from Self-Feedback (RLSF), a post-training stage that uses the model's own confidence as an intrinsic reward, mimicking how humans learn in the absence of external feedback. After a frozen LLM generates several chain-of-thought solutions, we define and compute the confidence of each final answer span and rank the traces accordingly. These synthetic preferences are then used to fine-tune the policy with standard preference optimization, similar to RLHF yet requiring no human labels, gold answers, or externally curated rewards. RLSF simultaneously (i) refines the model's probability estimates -- restoring well-behaved calibration -- and (ii) strengthens step-by-step reasoning, yielding improved performance on arithmetic reasoning and multiple-choice question answering. By turning a model's own uncertainty into useful self-feedback, RLSF affirms reinforcement learning on intrinsic model behaviour as a principled and data-efficient component of the LLM post-training pipeline and warrants further research in intrinsic rewards for LLM post-training.
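The abstract outlines a pipeline of sampling chain-of-thought traces, scoring each trace by the model's confidence in its final answer span, and turning the resulting ranking into synthetic preference pairs for standard preference optimization. The snippet below is a minimal sketch of that preference-construction step only, under assumptions the abstract does not fix: it uses length-normalized token log-probability over the answer span as the confidence measure, and the class and function names (Trace, answer_confidence, build_preference_pairs) are illustrative rather than the authors' implementation.

```python
# Sketch of RLSF-style synthetic preference construction (assumed details,
# not the paper's code): confidence = length-normalized probability of the
# final answer span; traces are ranked by this confidence and paired up.

import math
from dataclasses import dataclass
from itertools import combinations

@dataclass
class Trace:
    text: str                      # full chain-of-thought solution
    answer_logprobs: list[float]   # log-probs of tokens in the final answer span

def answer_confidence(trace: Trace) -> float:
    """Length-normalized probability of the final answer span."""
    return math.exp(sum(trace.answer_logprobs) / len(trace.answer_logprobs))

def build_preference_pairs(traces: list[Trace], margin: float = 0.05):
    """Rank traces by self-confidence and emit (chosen, rejected) pairs whose
    confidence gap exceeds a margin; these synthetic preferences would then
    feed a standard preference-optimization objective (e.g. DPO)."""
    ranked = sorted(traces, key=answer_confidence, reverse=True)
    pairs = []
    for hi, lo in combinations(ranked, 2):
        if answer_confidence(hi) - answer_confidence(lo) > margin:
            pairs.append((hi.text, lo.text))
    return pairs

# Toy example: three sampled traces for one prompt, with made-up log-probs.
traces = [
    Trace("CoT A ... answer: 42", [-0.1, -0.2, -0.1]),
    Trace("CoT B ... answer: 41", [-1.2, -0.9, -1.5]),
    Trace("CoT C ... answer: 42", [-0.3, -0.4, -0.2]),
]
for chosen, rejected in build_preference_pairs(traces):
    print(f"prefer: {chosen!r}  over: {rejected!r}")
```

In this reading, the confidence margin and the exact span over which probabilities are aggregated are free design choices; the key property is that no human labels or gold answers enter the preference signal.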

Carel van Niekerk, Renato Vukovic, Benjamin Matthias Ruppik, Hsien-chin Lin, Milica Gašić

Subject: computing technology, computer technology

Carel van Niekerk, Renato Vukovic, Benjamin Matthias Ruppik, Hsien-chin Lin, Milica Gašić. Post-Training Large Language Models via Reinforcement Learning from Self-Feedback [EB/OL]. (2025-07-29) [2025-08-11]. https://arxiv.org/abs/2507.21931
