Large Reasoning Models are not thinking straight: on the unreliability of thinking trajectories

Source: arXiv
Abstract

Large Language Models (LLMs) trained via Reinforcement Learning (RL) have recently achieved impressive results on reasoning benchmarks. Yet, growing evidence shows that these models often generate longer but ineffective chains of thought (CoTs), calling into question whether benchmark gains reflect real reasoning improvements. We present new evidence of overthinking, where models disregard correct solutions even when explicitly provided, instead continuing to generate unnecessary reasoning steps that often lead to incorrect conclusions. Experiments on three state-of-the-art models using the AIME2024 math benchmark reveal critical limitations in these models' ability to integrate corrective information, posing new challenges for achieving robust and interpretable reasoning.
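The abstract describes probing models by explicitly supplying a correct solution in the prompt and observing whether they still reason past it to a wrong conclusion. The following is a minimal sketch of that kind of probe, under stated assumptions: the prompt wording, the integer-only answer extraction (AIME answers are integers), the `overthinking_rate` metric, and the `query_model` placeholder are illustrative choices, not the authors' actual protocol.

```python
import re
from typing import Callable


def build_hinted_prompt(problem: str, correct_solution: str) -> str:
    """Prepend a verified solution so any further 'reasoning' is unnecessary."""
    return (
        f"Problem: {problem}\n"
        f"A verified correct solution is: {correct_solution}\n"
        "State the final answer."
    )


def extract_final_answer(response: str) -> str:
    """Take the last integer-looking token as the final answer."""
    numbers = re.findall(r"-?\d+", response)
    return numbers[-1] if numbers else ""


def overthinking_rate(
    problems: list[tuple[str, str]],     # (problem statement, correct answer) pairs
    query_model: Callable[[str], str],   # placeholder; swap in a real model call
) -> float:
    """Fraction of problems where the model's answer ignores the provided solution."""
    ignored = 0
    for problem, answer in problems:
        response = query_model(build_hinted_prompt(problem, answer))
        if extract_final_answer(response) != answer:
            ignored += 1
    return ignored / len(problems) if problems else 0.0


if __name__ == "__main__":
    # Dummy model that deliberates and then answers incorrectly, to make the sketch runnable.
    demo = [("Compute 2 + 2.", "4")]
    print(overthinking_rate(demo, lambda prompt: "After long deliberation, the answer is 0."))
```

With a real model call substituted for the stub, a nonzero rate on hint-injected prompts is the kind of signal the abstract refers to: the model keeps generating reasoning steps instead of adopting the corrective information it was given.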

Jhouben Cuesta-Ramirez, Samuel Beaussant, Mehdi Mounsif

Computing Technology; Computer Technology

Jhouben Cuesta-Ramirez, Samuel Beaussant, Mehdi Mounsif. Large Reasoning Models are not thinking straight: on the unreliability of thinking trajectories[EB/OL]. (2025-07-01)[2025-07-16]. https://arxiv.org/abs/2507.00711.
