
From Thinking to Output: Chain-of-Thought and Text Generation Characteristics in Reasoning Language Models


Source: arXiv

Abstract

Recently, there have been notable advancements in large language models (LLMs), demonstrating their growing abilities in complex reasoning. However, existing research largely overlooks a thorough and systematic comparison of these models' reasoning processes and outputs, particularly regarding their self-reflection patterns (also termed "Aha moments") and the interconnections across diverse domains. This paper proposes a novel framework for analyzing the reasoning characteristics of four cutting-edge large reasoning models (GPT-o1, DeepSeek-R1, Kimi-k1.5, and Grok-3) using keyword statistics and the LLM-as-a-judge paradigm. Our approach connects their internal thinking processes with their final outputs. A diverse dataset of real-world, scenario-based questions covering logical deduction, causal inference, and multi-step problem-solving is constructed, and a set of metrics is put forward to assess both the coherence of reasoning and the accuracy of the outputs. The results uncover various patterns in how these models balance exploration and exploitation, deal with problems, and reach conclusions during reasoning. Through quantitative and qualitative comparisons, disparities among the models are identified in aspects such as depth of reasoning, reliance on intermediate steps, and the degree of similarity between their thinking processes and output patterns and those of GPT-o1. This work offers valuable insights into the trade-off between computational efficiency and reasoning robustness and provides practical recommendations for enhancing model design and evaluation in practical applications. We publicly release our project at: https://github.com/ChangWenhan/FromThinking2Output
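The keyword-statistics step mentioned in the abstract can be sketched minimally. The marker list and function below are hypothetical illustrations of counting self-reflection ("Aha moment") cues in a chain-of-thought trace; the paper's actual keyword set and pipeline are not specified here.

```python
import re
from collections import Counter

# Hypothetical self-reflection markers; the paper's real keyword list may differ.
REFLECTION_MARKERS = ["wait", "aha", "hmm", "let me reconsider", "on second thought", "actually"]

def reflection_counts(trace: str) -> Counter:
    """Count occurrences of each self-reflection marker in a reasoning trace."""
    text = trace.lower()
    return Counter({m: len(re.findall(re.escape(m), text)) for m in REFLECTION_MARKERS})

trace = "Let me compute 2+2. Wait, actually, on second thought the answer is 4."
counts = reflection_counts(trace)
```

Aggregating such counts per model over a shared question set would give one simple, quantitative axis for comparing reasoning styles, alongside judgments from an LLM-as-a-judge.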

Junhao Liu, Zhenhao Xu, Yuxin Fang, Yichuan Chen, Zuobin Ying, Wenhan Chang

Computational Linguistics; Computer Science

Junhao Liu, Zhenhao Xu, Yuxin Fang, Yichuan Chen, Zuobin Ying, Wenhan Chang. From Thinking to Output: Chain-of-Thought and Text Generation Characteristics in Reasoning Language Models [EB/OL]. (2025-06-20) [2025-07-09]. https://arxiv.org/abs/2506.21609.
