National Preprint Platform

Mixture of Reasonings: Teach Large Language Models to Reason with Adaptive Strategies

Source: arXiv
English Abstract

Large language models (LLMs) excel in complex tasks through advanced prompting techniques like Chain-of-Thought (CoT) and Tree-of-Thought (ToT), but their reliance on manually crafted, task-specific prompts limits adaptability and efficiency. We introduce Mixture of Reasoning (MoR), a training framework that embeds diverse reasoning strategies into LLMs for autonomous, task-adaptive reasoning without external prompt engineering. MoR has two phases: Thought Generation, creating reasoning chain templates with models like GPT-4o, and SFT Dataset Construction, pairing templates with benchmark datasets for supervised fine-tuning. Our experiments show that MoR significantly enhances performance, with MoR150 achieving 0.730 (2.2% improvement) using CoT prompting and 0.734 (13.5% improvement) compared to baselines. MoR eliminates the need for task-specific prompts, offering a generalizable solution for robust reasoning across diverse tasks.
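The second phase described in the abstract, SFT Dataset Construction, can be sketched as a simple pairing step: reasoning-chain templates are matched with benchmark question-answer items to produce supervised fine-tuning examples. The sketch below is a minimal illustration of that idea, not the paper's implementation; all names (`SFTExample`, `build_sft_dataset`, the template texts) are hypothetical.

```python
from dataclasses import dataclass
import random

@dataclass
class SFTExample:
    prompt: str
    completion: str

def build_sft_dataset(templates, benchmark, k=1, seed=0):
    """Pair each benchmark (question, answer) item with k sampled
    reasoning-chain templates to form SFT prompt/completion pairs.

    This is a hypothetical sketch of the pairing step the abstract
    describes, not the authors' actual pipeline.
    """
    rng = random.Random(seed)
    dataset = []
    for question, answer in benchmark:
        for template in rng.sample(templates, k):
            prompt = f"{question}\n\nReasoning strategy:\n{template}"
            dataset.append(SFTExample(prompt=prompt, completion=answer))
    return dataset

# Toy usage: two illustrative templates, one benchmark item.
templates = [
    "Think step by step, then state the final answer.",
    "Decompose the problem into sub-questions and solve each.",
]
benchmark = [("What is 2 + 2?", "4")]
pairs = build_sft_dataset(templates, benchmark, k=2)
```

In this sketch each benchmark item yields one SFT example per sampled template, so fine-tuning exposes the model to several reasoning strategies for the same task, which is the mechanism the abstract credits for task-adaptive reasoning without external prompt engineering.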

Tao Xiong, Xavier Hu, Wenyan Fan, Shengyu Zhang

Subject: Computing Technology; Computer Technology

Tao Xiong, Xavier Hu, Wenyan Fan, Shengyu Zhang. Mixture of Reasonings: Teach Large Language Models to Reason with Adaptive Strategies [EB/OL]. (2025-07-03) [2025-07-16]. https://arxiv.org/abs/2507.00606.
