
Reasoning Strategies in Large Language Models: Can They Follow, Prefer, and Optimize?

Source: arXiv

Abstract

Human reasoning involves different strategies, each suited to specific problems. Prior work shows that large language models (LLMs) tend to favor a single reasoning strategy, potentially limiting their effectiveness on diverse reasoning challenges. In this work, we investigate whether prompting can control LLMs' reasoning strategies and assess its impact on logical problem-solving. Our experiments show that no single strategy consistently improves accuracy, but performance could be enhanced if models adaptively chose the optimal strategy for each problem. We propose methods to guide LLMs in strategy selection, highlighting new ways to refine their reasoning abilities.
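To make the prompting setup concrete, the sketch below shows one way strategy-controlled prompting could be implemented. This is a minimal illustrative sketch only: the strategy names and prompt wordings are hypothetical examples, not the authors' actual prompts or evaluation protocol.

```python
# Hypothetical sketch of strategy-controlled prompting for logical problems.
# The strategy labels and instruction texts below are illustrative assumptions,
# not the prompts used in the paper.

STRATEGY_PROMPTS = {
    "deductive": (
        "Solve the problem by applying general rules to the specific case, "
        "step by step."
    ),
    "case_analysis": (
        "Solve the problem by enumerating all possible cases and checking "
        "each one."
    ),
    "contradiction": (
        "Assume the conclusion is false and derive a contradiction."
    ),
}

def build_prompt(strategy: str, problem: str) -> str:
    """Prepend a strategy-specific instruction to a logic problem."""
    return f"{STRATEGY_PROMPTS[strategy]}\n\nProblem: {problem}\nAnswer:"

if __name__ == "__main__":
    problem = "If all A are B, and some C are not B, can some C be A?"
    # The same problem is posed under each strategy; a real experiment would
    # send each prompt to an LLM and compare answer accuracy across strategies.
    for strategy in STRATEGY_PROMPTS:
        print(f"--- {strategy} ---")
        print(build_prompt(strategy, problem))
        print()
```

Under this kind of setup, the abstract's adaptive-selection idea would amount to choosing which entry of the prompt table to apply per problem rather than fixing one strategy globally.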

Yanjian Zhang, Guillaume Wisniewski, Nadi Tomeh, Thierry Charnois

Subjects: Computing Technology, Computer Technology

Yanjian Zhang, Guillaume Wisniewski, Nadi Tomeh, Thierry Charnois. Reasoning Strategies in Large Language Models: Can They Follow, Prefer, and Optimize? [EB/OL]. (2025-07-16) [2025-08-02]. https://arxiv.org/abs/2507.11423
