
Internal Chain-of-Thought: Empirical Evidence for Layer-wise Subtask Scheduling in LLMs

Source: arXiv
Abstract

We show that large language models (LLMs) exhibit an *internal chain-of-thought*: they sequentially decompose and execute composite tasks layer by layer. Two claims ground our study: (i) distinct subtasks are learned at different network depths, and (ii) these subtasks are executed sequentially across layers. On a benchmark of 15 two-step composite tasks, we employ layer-from context-masking and propose a novel cross-task patching method, confirming (i). To examine claim (ii), we apply LogitLens to decode hidden states, revealing a consistent layer-wise execution pattern. We further replicate our analysis on the real-world TRACE benchmark, observing the same stepwise dynamics. Together, our results enhance LLM transparency by showing their capacity to internally plan and execute subtasks (or instructions), opening avenues for fine-grained, instruction-level activation steering.
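The LogitLens technique mentioned in the abstract projects each layer's hidden state through the model's unembedding matrix to read out what the model "believes" at intermediate depths. A minimal toy sketch of that idea (the shapes, vocabulary, and function name are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def logit_lens(hidden_states, W_U, vocab):
    """Decode each layer's hidden state by projecting it through the
    unembedding matrix W_U and taking the argmax token (logit-lens idea).
    Returns one decoded token per layer."""
    decoded = []
    for h in hidden_states:          # one hidden state per layer
        logits = h @ W_U             # (d_model,) @ (d_model, vocab) -> (vocab,)
        decoded.append(vocab[int(np.argmax(logits))])
    return decoded

# Toy example: 3 "layers", d_model = 4, a 3-token vocabulary (all hypothetical).
rng = np.random.default_rng(0)
W_U = rng.normal(size=(4, 3))
hidden = [rng.normal(size=4) for _ in range(3)]
vocab = ["step1", "step2", "answer"]
print(logit_lens(hidden, W_U, vocab))
```

On a real LLM one would take the per-layer hidden states at a fixed token position and the model's own unembedding matrix; a layer-wise subtask schedule would then show up as the decoded token switching from the first subtask's answer to the final answer at some intermediate depth.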

Zhipeng Yang, Junzhuo Li, Siyu Xia, Xuming Hu

Subject: computing technology, computer technology

Zhipeng Yang, Junzhuo Li, Siyu Xia, Xuming Hu. Internal Chain-of-Thought: Empirical Evidence for Layer-wise Subtask Scheduling in LLMs [EB/OL]. (2025-05-20) [2025-06-17]. https://arxiv.org/abs/2505.14530.