Understanding Aha Moments: from External Observations to Internal Mechanisms
Large Reasoning Models (LRMs), capable of reasoning through complex problems, have become crucial for tasks like programming, mathematics, and commonsense reasoning. However, a key challenge lies in understanding how these models acquire reasoning capabilities and exhibit "aha moments", in which they reorganize their methods to allocate more thinking time to problems. In this work, we systematically study "aha moments" in LRMs, from external observations (linguistic patterns, descriptions of uncertainty, and "Reasoning Collapse") to internal mechanisms (analysis in the latent space). We demonstrate that the "aha moment" is externally manifested in more frequent use of anthropomorphic tones for self-reflection and in an adaptive adjustment of uncertainty based on problem difficulty. This process helps the model complete reasoning without succumbing to "Reasoning Collapse". Internally, it corresponds to a separation between anthropomorphic characteristics and pure reasoning, with a stronger anthropomorphic tone for more difficult problems. Furthermore, we find that the "aha moment" helps models solve complex problems by altering their perception of problem difficulty: as layer depth increases, simpler problems tend to be perceived as more complex, while more difficult problems appear simpler.
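To make the layer-wise latent-space analysis concrete, the sketch below illustrates one common way such a study could be carried out: extract per-layer hidden states for prompts of known difficulty and fit a linear probe at each layer to see where difficulty becomes decodable. This is a minimal illustration, not the paper's actual method; the model name, the toy prompts, and the `difficulty_labels` annotation are all assumptions.

```python
# Minimal sketch of layer-wise probing for "perceived difficulty".
# Assumptions: model choice, toy prompts, and difficulty labels are
# illustrative only; a real study needs a large labeled prompt set.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from sklearn.linear_model import LogisticRegression

model_name = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # hypothetical choice
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, output_hidden_states=True)
model.eval()

prompts = ["What is 2 + 2?", "Prove that sqrt(2) is irrational."]  # toy examples
difficulty_labels = [0, 1]  # 0 = easy, 1 = hard (illustrative annotation)

# Collect the mean-pooled hidden state of every layer for each prompt.
feats = []
with torch.no_grad():
    for p in prompts:
        inputs = tokenizer(p, return_tensors="pt")
        hidden = model(**inputs).hidden_states  # tuple of [1, seq_len, dim] per layer
        feats.append([h.mean(dim=1).squeeze(0) for h in hidden])

# Fit one linear probe per layer; accuracy across layers indicates at which
# depth difficulty is linearly decodable from the representation.
num_layers = len(feats[0])
for layer in range(num_layers):
    X = torch.stack([f[layer] for f in feats]).float().numpy()
    probe = LogisticRegression(max_iter=1000).fit(X, difficulty_labels)
    print(f"layer {layer}: train acc = {probe.score(X, difficulty_labels):.2f}")
```

Under this setup, a crossover in probe behavior across depth would be one way to observe the reported effect that easy and hard problems are represented with shifting perceived difficulty as layers increase.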
Derek F. Wong, Junchao Wu, Xin Chen, Yunze Xiao, Xinyi Yang, Di Wang, Shu Yang
Subjects: Computing and Computer Technology; Fundamental Theory of Automation
Derek F. Wong, Junchao Wu, Xin Chen, Yunze Xiao, Xinyi Yang, Di Wang, Shu Yang. Understanding Aha Moments: from External Observations to Internal Mechanisms [EB/OL]. (2025-04-03) [2025-04-28]. https://arxiv.org/abs/2504.02956.