National Preprint Platform (国家预印本平台)

DEBATE, TRAIN, EVOLVE: Self Evolution of Language Model Reasoning


Source: arXiv
Abstract

Large language models (LLMs) have improved significantly in their reasoning through extensive training on massive datasets. However, relying solely on additional data for improvement is becoming increasingly impractical, highlighting the need for models to autonomously enhance their reasoning without external supervision. In this paper, we propose Debate, Train, Evolve (DTE), a novel ground-truth-free training framework that uses multi-agent debate traces to evolve a single language model. We also introduce a new prompting strategy, Reflect-Critique-Refine, to improve debate quality by explicitly instructing agents to critique and refine their reasoning. Extensive evaluations on five reasoning benchmarks with six open-weight models show that our DTE framework achieves substantial improvements, with an average accuracy gain of 8.92% on the challenging GSM-PLUS dataset. Furthermore, we observe strong cross-domain generalization, with an average accuracy gain of 5.8% on all other benchmarks, suggesting that our method captures general reasoning capabilities.
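The abstract describes a Reflect-Critique-Refine debate among multiple agents whose traces then serve as training data. The paper does not include code here, so the sketch below is only an illustrative reconstruction of what one such debate round might look like; `query_model` is a hypothetical stand-in for an LLM API call, stubbed with canned answers so the control flow runs end to end.

```python
# Illustrative sketch of one Reflect-Critique-Refine debate round
# (NOT the authors' implementation; prompts and function names are assumptions).

def query_model(prompt: str) -> str:
    """Hypothetical LLM interface, stubbed: answers the toy arithmetic
    question directly, otherwise echoes a truncated 'critique'."""
    if "2 + 2" in prompt:
        return "4"
    return "critique: " + prompt[:40]

def debate_round(question: str, num_agents: int = 3) -> list[str]:
    # Reflect: each agent independently drafts an answer with reasoning.
    drafts = [query_model(f"Answer step by step: {question}")
              for _ in range(num_agents)]
    refined = []
    for i, own in enumerate(drafts):
        peers = [d for j, d in enumerate(drafts) if j != i]
        # Critique: the agent sees peer answers and is asked to find
        # flaws in its own reasoning.
        critique = query_model(
            f"Your answer: {own}. Peers answered: {peers}. "
            f"Critique your reasoning."
        )
        # Refine: produce a final answer incorporating the critique.
        refined.append(
            query_model(f"Refine your answer to '{question}' given {critique}")
        )
    return refined

final_answers = debate_round("What is 2 + 2?")
```

In the full DTE loop described by the abstract, the traces produced by rounds like this (drafts, critiques, and refined answers) would then be used as ground-truth-free fine-tuning data to evolve the single underlying model.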

Gaurav Srivastava, Zhenyu Bi, Meng Lu, Xuan Wang

Subject: Computing Technology; Computer Technology

Gaurav Srivastava, Zhenyu Bi, Meng Lu, Xuan Wang. DEBATE, TRAIN, EVOLVE: Self Evolution of Language Model Reasoning [EB/OL]. (2025-05-21) [2025-06-04]. https://arxiv.org/abs/2505.15734.
