Teaching LLM to Reason: Reinforcement Learning from Algorithmic Problems without Code

Source: arXiv
Abstract

Enhancing reasoning capabilities remains a central focus in the LLM research community. A promising direction involves requiring models to simulate code execution step by step to derive outputs for given inputs. However, as code is often designed for large-scale systems, direct application leads to over-reliance on complex data structures and algorithms even for simple cases, resulting in overfitting to algorithmic patterns rather than core reasoning structures. To address this, we propose TeaR, which aims to teach LLMs to reason better. TeaR leverages careful data curation and reinforcement learning to guide models in discovering optimal reasoning paths through code-related tasks, thereby improving general reasoning abilities. We conduct extensive experiments using two base models and three long-CoT distillation models, with model sizes ranging from 1.5 billion to 32 billion parameters, across 17 benchmarks spanning Math, Knowledge, Code, and Logical Reasoning. The results consistently show significant performance improvements. Notably, TeaR achieves a 35.9% improvement on Qwen2.5-7B and 5.9% on R1-Distilled-7B.
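The abstract describes reinforcement learning with a verifiable signal: the model reasons through an algorithmic problem stated in natural language and is scored on whether its predicted output matches the ground truth. The paper does not include its reward implementation, so the sketch below is only an illustration of that general setup; the `extract_answer` helper, the `\boxed{...}` answer convention, and the exact-match scoring are assumptions for the example, not the authors' code.

```python
import re


def extract_answer(response: str) -> str | None:
    """Pull the final answer out of a \\boxed{...} span.

    Hypothetical convention; the paper does not specify its answer format.
    """
    match = re.search(r"\\boxed\{(.*?)\}", response)
    return match.group(1).strip() if match else None


def reward(response: str, expected_output: str) -> float:
    """Binary verifiable reward: 1.0 iff the model's predicted output
    for the algorithmic problem exactly matches the ground truth."""
    answer = extract_answer(response)
    return 1.0 if answer is not None and answer == expected_output else 0.0


# Example: an algorithmic problem posed without any code, as in the title.
prompt = (
    "You are given the list [3, 1, 4, 1, 5]. Repeatedly remove the largest "
    "remaining element and append it to a new list. What is the final new "
    "list? Reason step by step, then give the answer as \\boxed{...}."
)
good_response = "Step 1: remove 5 ... Final answer: \\boxed{[5, 4, 3, 1, 1]}"
print(reward(good_response, "[5, 4, 3, 1, 1]"))  # 1.0
```

Because the reward depends only on the final output, not on any reference program, a sketch like this fits the "without code" framing: the model is free to discover its own reasoning path, and only the verifiable result is scored.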

Keqin Bao, Nuo Chen, Xiaoyuan Li, Binyuan Hui, Bowen Yu, Fuli Feng, Xiangnan He, Dayiheng Liu

Subject: Computing Technology, Computer Technology

Keqin Bao, Nuo Chen, Xiaoyuan Li, Binyuan Hui, Bowen Yu, Fuli Feng, Xiangnan He, Dayiheng Liu. Teaching LLM to Reason: Reinforcement Learning from Algorithmic Problems without Code [EB/OL]. (2025-07-14) [2025-07-21]. https://arxiv.org/abs/2507.07498.
