
Language Models over Canonical Byte-Pair Encodings


Source: arXiv
Abstract

Modern language models represent probability distributions over character strings as distributions over (shorter) token strings derived via a deterministic tokenizer, such as byte-pair encoding. While this approach is highly effective at scaling up language models to large corpora, its current incarnations have a concerning property: the model assigns nonzero probability mass to an exponential number of noncanonical token encodings of each character string -- these are token strings that decode to valid character strings but are impossible under the deterministic tokenizer (i.e., they will never be seen in any training corpus, no matter how large). This misallocation is both erroneous, as noncanonical strings never appear in training data, and wasteful, diverting probability mass away from plausible outputs. These are avoidable mistakes! In this work, we propose methods to enforce canonicality in token-level language models, ensuring that only canonical token strings are assigned positive probability. We present two approaches: (1) canonicality by conditioning, leveraging test-time inference strategies without additional training, and (2) canonicality by construction, a model parameterization that guarantees canonical outputs but requires training. We demonstrate that fixing canonicality mistakes improves the likelihood of held-out data for several models and corpora.
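To make the abstract's central observation concrete, here is a minimal sketch in Python. The two-rule merge table and the names `bpe_encode`, `decode`, and `is_canonical` are hypothetical illustrations, not the paper's tokenizer or code; the point is only that several token strings decode to the same character string while the deterministic tokenizer can produce exactly one of them.

```python
# A minimal, self-contained sketch of canonical vs. noncanonical BPE encodings.
# The alphabet and merge rules below are toy choices for illustration.

MERGES = [("a", "b"), ("ab", "c")]  # merge rules in learned priority order

def bpe_encode(text: str) -> list[str]:
    """Deterministic BPE: apply each merge rule left to right, in priority order.
    (Simplified: real BPE operates on bytes and rescans for the best merge.)"""
    tokens = list(text)
    for left, right in MERGES:
        merged, i = [], 0
        while i < len(tokens):
            if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == (left, right):
                merged.append(left + right)   # apply the merge
                i += 2
            else:
                merged.append(tokens[i])
                i += 1
        tokens = merged
    return tokens

def decode(tokens: list[str]) -> str:
    return "".join(tokens)

def is_canonical(tokens: list[str]) -> bool:
    """A token string is canonical iff re-encoding its decoding reproduces it."""
    return bpe_encode(decode(tokens)) == tokens

# Three distinct token strings decode to the same character string "abc",
# but the deterministic tokenizer only ever produces one of them:
print(bpe_encode("abc"))              # ['abc']
print(is_canonical(["abc"]))          # True  -- the canonical encoding
print(is_canonical(["ab", "c"]))      # False -- decodes to "abc", never emitted
print(is_canonical(["a", "b", "c"]))  # False -- likewise noncanonical
```

An unconstrained token-level language model can assign positive probability to all three token strings above. Per the abstract, the paper's two remedies either condition the model on the canonicality event at inference time (no retraining) or parameterize the model so that only canonical token strings receive positive probability (requires training).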

Tim Vieira, Tianyu Liu, Clemente Pasti, Yahya Emara, Brian DuSell, Benjamin LeBrun, Mario Giulianelli, Juan Luis Gastaldi, Timothy J. O'Donnell, Ryan Cotterell

Subjects: Computing Technology; Computer Technology

Tim Vieira, Tianyu Liu, Clemente Pasti, Yahya Emara, Brian DuSell, Benjamin LeBrun, Mario Giulianelli, Juan Luis Gastaldi, Timothy J. O'Donnell, Ryan Cotterell. Language Models over Canonical Byte-Pair Encodings [EB/OL]. (2025-06-09) [2025-07-16]. https://arxiv.org/abs/2506.07956.
