Spelling-out is not Straightforward: LLMs' Capability of Tokenization from Token to Characters
Large language models (LLMs) can spell out tokens character by character with high accuracy, yet they struggle with more complex character-level tasks, such as identifying compositional subcomponents within tokens. In this work, we investigate how LLMs internally represent and utilize character-level information during the spelling-out process. Our analysis reveals that, although spelling out is a simple task for humans, it is not handled in a straightforward manner by LLMs. Specifically, we show that the embedding layer does not fully encode character-level information, particularly beyond the first character. As a result, LLMs rely on intermediate and higher Transformer layers to reconstruct character-level knowledge, where we observe a distinct "breakthrough" in their spelling behavior. We validate this mechanism through three complementary analyses: probing classifiers, identification of knowledge neurons, and inspection of attention weights.
Tatsuya Hiraoka, Kentaro Inui
Linguistics
Tatsuya Hiraoka, Kentaro Inui. Spelling-out is not Straightforward: LLMs' Capability of Tokenization from Token to Characters [EB/OL]. (2025-06-12) [2025-06-23]. https://arxiv.org/abs/2506.10641.
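As a rough illustration of the probing-classifier analysis the abstract mentions, the sketch below trains a per-layer logistic-regression probe to predict a token's second character from that token's hidden state at each layer. The GPT-2 backbone, scikit-learn probes, vocabulary filtering, and all names here are assumptions for illustration, not the paper's actual setup; if character-level information is reconstructed in intermediate layers, probe accuracy should rise with depth.

```python
# Minimal layer-wise probing sketch (assumed setup, not the paper's).
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModel, AutoTokenizer

MODEL = "gpt2"  # assumed backbone; the paper may use different LLMs
K = 1           # probe the second character, where (per the abstract)
                # the embedding layer alone carries little information

tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModel.from_pretrained(MODEL, output_hidden_states=True).eval()

# Toy sample: ASCII alphabetic vocabulary entries long enough to have a
# character at position K.
words = [w for w in tok.get_vocab()
         if w.isascii() and w.isalpha() and len(w) > K][:2000]

feats_per_layer, labels = None, []
with torch.no_grad():
    for w in words:
        ids = tok(w, return_tensors="pt", add_special_tokens=False)
        if ids["input_ids"].shape[1] != 1:  # keep single-token words only
            continue
        out = model(**ids)
        # One hidden vector per layer (including the embedding layer).
        vecs = [h[0, 0].numpy() for h in out.hidden_states]
        if feats_per_layer is None:
            feats_per_layer = [[] for _ in vecs]
        for layer, v in enumerate(vecs):
            feats_per_layer[layer].append(v)
        labels.append(w[K].lower())

# Fit one probe per layer; accuracy rising with depth would be consistent
# with character information being reconstructed in intermediate layers.
split = int(0.8 * len(labels))
for layer, feats in enumerate(feats_per_layer):
    clf = LogisticRegression(max_iter=1000)
    clf.fit(feats[:split], labels[:split])
    print(f"layer {layer:2d}: probe accuracy "
          f"{clf.score(feats[split:], labels[split:]):.3f}")
```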