Can Large Language Models Develop Strategic Reasoning? Post-training Insights from Learning Chess

Source: arXiv
Abstract

While reinforcement learning (RL) for large language models (LLMs) has shown promise in mathematical reasoning, strategic reasoning for LLMs using RL remains largely unexplored. We investigate whether LLMs can develop strategic reasoning capabilities through RL in chess. To this end, we leverage a chess-pretrained action-value network to provide dense reward on the LLM's output move quality, which can be seen as a form of knowledge distillation. Our experiments show that our distillation-based dense rewards often outperform sparse binary rewards. However, surprisingly, all models plateau far below expert levels. We provide SFT and RL ablations on chess reasoning training and find evidence that this limitation stems from a deficit in the pretrained models' internal understanding of chess--a deficit which RL alone may not be able to fully overcome.
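To make the contrast between the two reward schemes concrete, below is a minimal Python sketch, not the authors' implementation, of a sparse binary reward versus a dense reward distilled from a chess-pretrained action-value (Q) network. The q_network and legal_moves callables are hypothetical stand-ins for such a pretrained model and a chess rule engine.

# Sketch only: contrasts the sparse and dense reward signals described in the
# abstract. `q_network` and `legal_moves` are hypothetical placeholders.
from typing import Callable, List


def sparse_reward(llm_move: str, best_move: str) -> float:
    """Binary reward: 1 only if the LLM's move matches the reference best move."""
    return 1.0 if llm_move == best_move else 0.0


def dense_reward(
    fen: str,
    llm_move: str,
    q_network: Callable[[str, str], float],   # hypothetical: Q(position, move)
    legal_moves: Callable[[str], List[str]],  # hypothetical: legal moves for a FEN
) -> float:
    """Dense reward: the move's Q-value normalized over all legal moves,
    so near-best moves still earn partial credit (a form of distillation)."""
    moves = legal_moves(fen)
    if llm_move not in moves:
        return 0.0  # illegal or unparsable output earns no reward
    q_values = {m: q_network(fen, m) for m in moves}
    best_q, worst_q = max(q_values.values()), min(q_values.values())
    if best_q == worst_q:
        return 1.0  # all moves rated equally; avoid division by zero
    return (q_values[llm_move] - worst_q) / (best_q - worst_q)

Under this kind of shaping, a reasonable but suboptimal move still receives a graded signal rather than zero, which is one plausible reading of why the distillation-based dense rewards often outperform sparse binary rewards in the paper's experiments.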

Dongyoon Hwang, Hojoon Lee, Jaegul Choo, Dongmin Park, Jongho Park

Subject: Computing Technology; Computer Science and Technology

Dongyoon Hwang, Hojoon Lee, Jaegul Choo, Dongmin Park, Jongho Park. Can Large Language Models Develop Strategic Reasoning? Post-training Insights from Learning Chess [EB/OL]. (2025-07-02) [2025-07-18]. https://arxiv.org/abs/2507.00726.
