
Probability Consistency in Large Language Models: Theoretical Foundations Meet Empirical Discrepancies

Source: arXiv
Abstract

Can autoregressive large language models (LLMs) learn consistent probability distributions when trained on sequences in different token orders? We prove formally that for any well-defined probability distribution, sequence perplexity is invariant under any factorization, including forward, backward, or arbitrary permutations. This result establishes a rigorous theoretical foundation for studying how LLMs learn from data and defines principled protocols for empirical evaluation. Applying these protocols, we show that prior studies examining ordering effects suffer from critical methodological flaws. We retrain GPT-2 models on scientific text in forward, backward, and arbitrarily permuted orders. We find systematic deviations from theoretical invariance across all orderings, with arbitrary permutations deviating strongly from both the forward and backward models, which largely (but not completely) agree with one another. These deviations are traceable to differences in self-attention, reflecting positional and locality biases in processing. Our theoretical and empirical results provide novel avenues for understanding positional biases in LLMs and suggest methods for detecting when LLMs' probability distributions are inconsistent and therefore untrustworthy.
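
As a minimal sketch of the invariance result (the notation below is ours, not taken from the paper): write a token sequence as $x_{1:T} = (x_1, \dots, x_T)$. For any permutation $\sigma$ of $\{1, \dots, T\}$, the chain rule factorizes the same joint probability,

$$P(x_{1:T}) \;=\; \prod_{t=1}^{T} P\!\left(x_{\sigma(t)} \,\middle|\, x_{\sigma(1)}, \dots, x_{\sigma(t-1)}\right),$$

so the sequence perplexity

$$\mathrm{PPL}(x_{1:T}) \;=\; \exp\!\left(-\tfrac{1}{T}\log P(x_{1:T})\right) \;=\; P(x_{1:T})^{-1/T}$$

is identical for every factorization order, including the forward order ($\sigma(t) = t$) and the backward order ($\sigma(t) = T - t + 1$). The empirical question is whether separately trained models actually realize the same joint distribution.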

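A minimal sketch, assuming the Hugging Face transformers API, of the comparison protocol the abstract implies: score a text with a forward-trained model, score its token-reversed counterpart with a backward-trained model, and compare perplexities. The checkpoint paths are hypothetical placeholders, not released artifacts, and this is an illustration rather than the authors' code.

import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

def sequence_perplexity(model, input_ids):
    # labels=input_ids makes the model return the mean next-token cross-entropy,
    # so exponentiating gives the per-token perplexity of this sequence.
    with torch.no_grad():
        loss = model(input_ids, labels=input_ids).loss
    return math.exp(loss.item())

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
forward_model = GPT2LMHeadModel.from_pretrained("path/to/forward-gpt2").eval()    # hypothetical checkpoint
backward_model = GPT2LMHeadModel.from_pretrained("path/to/backward-gpt2").eval()  # hypothetical checkpoint

text = "Probability consistency is a property of the learned joint distribution."
ids = tokenizer(text, return_tensors="pt").input_ids
ppl_fwd = sequence_perplexity(forward_model, ids)
ppl_bwd = sequence_perplexity(backward_model, torch.flip(ids, dims=[1]))  # reversed token order

# Theoretical invariance predicts ppl_fwd == ppl_bwd; a systematic gap is the kind of
# empirical discrepancy the abstract reports.
print(f"forward: {ppl_fwd:.2f}  backward: {ppl_bwd:.2f}  gap: {abs(ppl_fwd - ppl_bwd):.2f}")
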
Xiaoliang Luo, Xinyi Xu, Michael Ramscar, Bradley C. Love

Computing Technology; Computer Technology

Xiaoliang Luo, Xinyi Xu, Michael Ramscar, Bradley C. Love. Probability Consistency in Large Language Models: Theoretical Foundations Meet Empirical Discrepancies [EB/OL]. (2025-05-13) [2025-07-16]. https://arxiv.org/abs/2505.08739.