
Read Quietly, Think Aloud: Decoupling Comprehension and Reasoning in LLMs

Source: arXiv
Abstract

Large Language Models (LLMs) have demonstrated remarkable proficiency in understanding text and generating high-quality responses. However, a critical distinction from human cognition is their typical lack of a distinct internal `reading' or deliberation phase before `speaking' (i.e., generating text). Humans often engage in silent reading to comprehend context and formulate thoughts prior to articulation. This paper investigates methods to imbue LLMs with a similar capacity for internal processing. We introduce and evaluate techniques that encourage LLMs to `read silently.' Our findings indicate that even a straightforward approach, such as providing the model with an initial contextual prompt or `reading space' before it begins predicting subsequent tokens for the final output, can yield significant performance improvements. We further enhance this concept by developing a `reading buddy' architecture, where an auxiliary component silently processes the input and provides refined contextual insights to the primary generation model. These approaches aim to foster deeper understanding in LLMs so that they can produce better-reasoned responses, moving them one step closer to more human-like text processing. Our results indicate that these simple techniques can have a surprisingly strong impact, boosting accuracy by multiple points.
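The abstract does not spell out the implementation, but the `reading space' idea can be read as a two-pass generation scheme: the model first produces private notes about the input, then conditions its visible answer on those notes. Below is a minimal sketch of that reading using the Hugging Face transformers API; the model name, prompt wording, generation lengths, and the `answer_with_reading_space` helper are all illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of a two-pass "read silently,
# then answer" scheme. Assumptions: any chat-tuned causal LM suffices;
# the model name, prompts, and token budgets below are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "Qwen/Qwen2.5-0.5B-Instruct"  # assumed placeholder model

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

def generate(prompt: str, max_new_tokens: int) -> str:
    """Generate a completion and return only the newly produced text."""
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decoder-only models echo the prompt; slice it off before decoding.
    return tokenizer.decode(
        output[0, inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )

def answer_with_reading_space(context: str, question: str) -> str:
    # Pass 1 ("read quietly"): the model writes private notes on the
    # context. These notes are never shown to the user.
    notes = generate(
        "Read the passage below carefully and write brief notes on its "
        f"key facts.\n\nPassage:\n{context}\n\nNotes:",
        max_new_tokens=128,
    )
    # Pass 2 ("think aloud"): the final answer is conditioned on both
    # the original passage and the silently produced notes.
    return generate(
        f"Passage:\n{context}\n\nNotes:\n{notes}\n\n"
        f"Question: {question}\nAnswer:",
        max_new_tokens=64,
    )
```

On the same reading, the `reading buddy' architecture would replace the first pass with a separate auxiliary model whose refined notes are fed to the primary generator, rather than having one model play both roles.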

Yuanxin Wang, Ganesh Venkatesh

Computing Technology, Computer Technology

Yuanxin Wang, Ganesh Venkatesh. Read Quietly, Think Aloud: Decoupling Comprehension and Reasoning in LLMs [EB/OL]. (2025-07-04) [2025-07-19]. https://arxiv.org/abs/2507.03327.
