Learning to Plan via Supervised Contrastive Learning and Strategic Interpolation: A Chess Case Study
Modern chess engines achieve superhuman performance through deep tree search and regressive evaluation, while human players rely on intuition to select candidate moves followed by a shallow search to validate them. To model this intuition-driven planning process, we train a transformer encoder using supervised contrastive learning to embed board states into a latent space structured by positional evaluation. In this space, distance reflects evaluative similarity, and visualized trajectories display interpretable transitions between game states. We demonstrate that move selection can occur entirely within this embedding space by advancing toward favorable regions, without relying on deep search. Despite using only a 6-ply beam search, our model achieves an estimated Elo rating of 2593. Performance improves with both model size and embedding dimensionality, suggesting that latent planning may offer a viable alternative to traditional search. Although we focus on chess, the proposed embedding-based planning method can be generalized to other perfect-information games where state evaluations are learnable. All source code is available at https://github.com/andrewhamara/SOLIS.
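The abstract's core idea, choosing a move by advancing toward favorable regions of the embedding space rather than by deep search, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the encoder, the anchor embedding representing winning positions, and all function names here are hypothetical stand-ins.

```python
import numpy as np

def select_move(candidate_states, encode, favorable_anchor):
    """Pick the candidate state whose embedding lies closest (by cosine
    similarity) to an anchor direction representing favorable positions.

    `encode` and `favorable_anchor` are hypothetical: in the paper's setup
    the encoder is a transformer trained with supervised contrastive
    learning, and favorable regions emerge from positional evaluation.
    """
    embs = np.stack([encode(s) for s in candidate_states])
    embs = embs / np.linalg.norm(embs, axis=1, keepdims=True)
    anchor = favorable_anchor / np.linalg.norm(favorable_anchor)
    scores = embs @ anchor  # cosine similarity to the favorable direction
    return int(np.argmax(scores))

# Toy usage with a hand-crafted 2-D "embedding" per state:
toy_embeddings = {
    "state_a": np.array([0.9, 0.1]),   # near the favorable direction
    "state_b": np.array([-1.0, 0.0]),  # opposite direction
}
anchor = np.array([1.0, 0.0])
best = select_move(["state_a", "state_b"], toy_embeddings.__getitem__, anchor)
```

In the paper this greedy selection is combined with a shallow (6-ply) beam search, so several high-scoring candidates would be retained per step instead of a single argmax.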
Andrew Hamara, Greg Hamerly, Pablo Rivas, Andrew C. Freeman
Subjects: Computing technology; computer science
Andrew Hamara, Greg Hamerly, Pablo Rivas, Andrew C. Freeman. Learning to Plan via Supervised Contrastive Learning and Strategic Interpolation: A Chess Case Study [EB/OL]. (2025-06-05) [2025-07-16]. https://arxiv.org/abs/2506.04892.