Pause Tokens Strictly Increase the Expressivity of Constant-Depth Transformers
Pause tokens, simple filler symbols such as "...", consistently improve Transformer performance on both language and mathematical tasks, yet their theoretical effect remains unexplained. We provide the first formal separation result, proving that adding pause tokens to constant-depth, logarithmic-width Transformers strictly increases their computational expressivity. With bounded-precision activations, Transformers without pause tokens compute only a strict subset of $\mathsf{AC}^0$ functions, while adding a polynomial number of pause tokens allows them to express the entire class. For logarithmic-precision Transformers, we show that adding pause tokens achieves expressivity equivalent to $\mathsf{TC}^0$, matching known upper bounds. Empirically, we demonstrate that two-layer causally masked Transformers can learn parity when supplied with pause tokens, a function that they appear unable to learn without them. Our results provide a rigorous theoretical explanation for prior empirical findings, clarify how pause tokens interact with width, depth, and numeric precision, and position them as a distinct mechanism, complementary to chain-of-thought prompting, for enhancing Transformer reasoning.
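To make the parity experiment described above concrete, the following is a minimal sketch, under assumed settings, of how inputs with appended pause tokens might be constructed. The token ids, the number of pause tokens, and the input length are illustrative choices, not details taken from the paper.

```python
# Minimal sketch (assumed setup, not the paper's code): constructing a parity
# dataset in which each bit string is followed by k pause/filler tokens before
# a final answer-query position. Token ids and k are illustrative choices.
import random

PAUSE, ANSWER = 2, 3  # hypothetical token ids; the input bits are 0/1

def make_example(n_bits: int, n_pause: int):
    bits = [random.randint(0, 1) for _ in range(n_bits)]
    parity = sum(bits) % 2                        # target: XOR of all input bits
    tokens = bits + [PAUSE] * n_pause + [ANSWER]  # pause tokens give the model
                                                  # extra positions to compute in
    return tokens, parity

if __name__ == "__main__":
    random.seed(0)
    x, y = make_example(n_bits=8, n_pause=8)
    print(x, "->", y)
```

A two-layer causally masked Transformer would then be trained to predict the parity at the final position; per the abstract, the same model without the pause positions appears unable to learn the task.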
Charles London, Varun Kanade
Computing Technology, Computer Technology
Charles London, Varun Kanade. Pause Tokens Strictly Increase the Expressivity of Constant-Depth Transformers [EB/OL]. (2025-05-27) [2025-06-17]. https://arxiv.org/abs/2505.21024.