
Information Locality as an Inductive Bias for Neural Language Models

Source: arXiv

Abstract

Inductive biases are inherent in every machine learning system, shaping how models generalize from finite data. In the case of neural language models (LMs), debates persist as to whether these biases align with or diverge from human processing constraints. To address this issue, we propose a quantitative framework that allows for controlled investigations into the nature of these biases. Within our framework, we introduce $m$-local entropy, an information-theoretic measure derived from average lossy-context surprisal that captures the local uncertainty of a language by quantifying how effectively the $m-1$ preceding symbols disambiguate the next symbol. In experiments on both perturbed natural language corpora and languages defined by probabilistic finite-state automata (PFSAs), we show that languages with higher $m$-local entropy are more difficult for Transformer and LSTM LMs to learn. These results suggest that neural LMs, much like humans, are highly sensitive to the local statistical structure of a language.
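The paper defines $m$-local entropy via average lossy-context surprisal; for intuition, the sketch below estimates the closely related plug-in conditional entropy of the next symbol given the preceding $m-1$ symbols from raw $n$-gram counts. The helper name `m_local_entropy` and the toy sequences are illustrative assumptions, not code from the paper.

```python
from collections import Counter
from math import log2

def m_local_entropy(tokens, m):
    """Plug-in estimate of the conditional entropy H(X_t | X_{t-m+1..t-1}):
    how uncertain the next symbol is once the (m-1) preceding symbols
    are known. Lower values mean a short local window disambiguates more.
    """
    context_counts = Counter()  # counts of (m-1)-gram contexts
    ngram_counts = Counter()    # counts of m-grams (context + next symbol)
    for i in range(len(tokens) - m + 1):
        context_counts[tuple(tokens[i : i + m - 1])] += 1
        ngram_counts[tuple(tokens[i : i + m])] += 1

    total = sum(ngram_counts.values())
    entropy = 0.0
    for ngram, count in ngram_counts.items():
        p_joint = count / total                             # p(context, next)
        p_cond = count / context_counts[ngram[:-1]]         # p(next | context)
        entropy -= p_joint * log2(p_cond)
    return entropy

# A sequence whose next symbol is fully determined by its predecessor
# has zero 2-local entropy; a less predictable sequence scores higher.
print(m_local_entropy(list("abababab"), 2))  # 0.0 bits
print(m_local_entropy(list("abbaabab"), 2))  # ~0.86 bits
```

Under this estimate, a language whose next symbol is pinned down by a short preceding window scores low, which matches the abstract's claim that such languages are easier for Transformer and LSTM LMs to learn.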

Taiga Someya, Anej Svete, Brian DuSell, Timothy J. O'Donnell, Mario Giulianelli, Ryan Cotterell

Subject: Computing Technology, Computer Technology

Taiga Someya, Anej Svete, Brian DuSell, Timothy J. O'Donnell, Mario Giulianelli, Ryan Cotterell. Information Locality as an Inductive Bias for Neural Language Models [EB/OL]. (2025-06-05) [2025-07-09]. https://arxiv.org/abs/2506.05136.
