
Efficient Post-Training Refinement of Latent Reasoning in Large Language Models

Source: arXiv

Abstract

Reasoning is a key component of language understanding in Large Language Models. While Chain-of-Thought prompting enhances performance via explicit intermediate steps, it incurs substantial token overhead and follows a fixed reasoning trajectory, preventing step-wise refinement. Recent advances in latent reasoning address these limitations by refining internal reasoning processes directly in the model's latent space, without producing explicit outputs. However, a key challenge remains: how to effectively update reasoning embeddings during post-training to guide the model toward more accurate solutions. To overcome this challenge, we propose a lightweight post-training framework that refines latent reasoning trajectories using two novel strategies: 1) Contrastive reasoning feedback, which compares reasoning embeddings against strong and weak baselines to infer effective update directions via embedding enhancement; 2) Residual embedding refinement, which stabilizes updates by progressively integrating current and historical gradients, enabling fast yet controlled convergence. Extensive experiments and case studies on five reasoning benchmarks demonstrate the effectiveness of the proposed framework. Notably, it achieves a 5% accuracy gain on MathQA without additional training.
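The abstract only names the two strategies, so the following is a minimal PyTorch-style sketch of how they might compose, not the authors' implementation: all names (refine_latent_reasoning, reasoning_emb, strong_emb, weak_emb), the triplet-loss realization of the contrastive feedback, and the hyperparameters (lr, beta, steps, margin) are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def refine_latent_reasoning(reasoning_emb, strong_emb, weak_emb,
                            lr=0.1, beta=0.9, steps=5, margin=0.2):
    """Hypothetical sketch of the two strategies named in the abstract.
    All names and hyperparameters are assumptions, not the paper's code."""
    # Treat the reasoning embedding as the only trainable tensor.
    emb = reasoning_emb.clone().requires_grad_(True)
    residual = torch.zeros_like(emb)  # running blend of past gradients
    for _ in range(steps):
        # 1) Contrastive reasoning feedback: pull the embedding toward the
        #    strong baseline and away from the weak one, realized here with
        #    a standard triplet margin loss.
        loss = F.triplet_margin_loss(emb.unsqueeze(0),
                                     strong_emb.unsqueeze(0),
                                     weak_emb.unsqueeze(0),
                                     margin=margin)
        (grad,) = torch.autograd.grad(loss, emb)
        # 2) Residual embedding refinement: blend the current gradient with
        #    accumulated history (momentum-style) so each update stays
        #    fast yet controlled.
        residual = beta * residual + (1.0 - beta) * grad
        emb = (emb - lr * residual).detach().requires_grad_(True)
    return emb.detach()

# Purely illustrative usage with random 768-dim embeddings:
d = 768
refined = refine_latent_reasoning(torch.randn(d), torch.randn(d), torch.randn(d))
```

Under these assumptions, the residual term plays the role of momentum: it damps oscillation from any single contrastive step, which is one plausible reading of "fast yet controlled convergence" in the abstract.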

Xinyuan Wang, Dongjie Wang, Wangyang Ying, Haoyue Bai, Nanxu Gong, Sixun Dong, Kunpeng Liu, Yanjie Fu

Subject: Computing Technology; Computer Technology

Xinyuan Wang, Dongjie Wang, Wangyang Ying, Haoyue Bai, Nanxu Gong, Sixun Dong, Kunpeng Liu, Yanjie Fu. Efficient Post-Training Refinement of Latent Reasoning in Large Language Models [EB/OL]. (2025-06-10) [2025-06-23]. https://arxiv.org/abs/2506.08552.