
Low-Perplexity LLM-Generated Sequences and Where To Find Them

Source: arXiv
Abstract

As Large Language Models (LLMs) become increasingly widespread, understanding how specific training data shapes their outputs is crucial for transparency, accountability, privacy, and fairness. To explore how LLMs leverage and replicate their training data, we introduce a systematic approach centered on analyzing low-perplexity sequences - high-probability text spans generated by the model. Our pipeline reliably extracts such long sequences across diverse topics while avoiding degeneration, then traces them back to their sources in the training data. Surprisingly, we find that a substantial portion of these low-perplexity spans cannot be mapped to the corpus. For those that do match, we quantify the distribution of occurrences across source documents, highlighting the scope and nature of verbatim recall and paving the way toward a better understanding of how LLMs' training data impacts their behavior.
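The paper's pipeline is not reproduced here, but the core idea the abstract describes - scoring generated text for high-probability (low-perplexity) spans and then checking whether those spans occur verbatim in a reference corpus - can be sketched as follows. This is a minimal illustration assuming a Hugging Face causal language model; the model name, the 10-token window, and the perplexity threshold are placeholder assumptions, not values from the paper.

```python
# Minimal sketch (not the authors' pipeline): locate low-perplexity spans in
# model-generated text and naively check them against a corpus.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # assumption: any causal LM checkpoint would do
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()


def token_logprobs(text: str):
    """Return the tokens of `text` (after the first) and the log-probability
    the model assigns to each one given its preceding context."""
    input_ids = tokenizer(text, return_tensors="pt")["input_ids"]
    with torch.no_grad():
        logits = model(input_ids).logits              # (1, seq_len, vocab)
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = input_ids[0, 1:]
    lp = log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)
    return tokenizer.convert_ids_to_tokens(targets.tolist()), lp


def low_perplexity_spans(text: str, window: int = 10, max_ppl: float = 3.0):
    """Slide a fixed-size window over the token sequence and keep windows
    whose perplexity (exp of mean negative log-likelihood) is below max_ppl."""
    tokens, lp = token_logprobs(text)
    spans = []
    for i in range(len(tokens) - window + 1):
        ppl = torch.exp(-lp[i:i + window].mean()).item()
        if ppl <= max_ppl:
            span_text = tokenizer.convert_tokens_to_string(tokens[i:i + window])
            spans.append((i, span_text, ppl))
    return spans


def count_corpus_matches(span: str, corpus_docs: list[str]) -> int:
    """Naive stand-in for source tracing: count corpus documents containing
    the span verbatim (a real pipeline would query an index over the corpus)."""
    return sum(span in doc for doc in corpus_docs)
```

In this sketch, spans returned by `low_perplexity_spans` that yield zero matches from `count_corpus_matches` correspond to the "unmapped" low-perplexity spans the abstract highlights, while positive counts give the per-document occurrence distribution for matched spans.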

Arthur Wuhrmann, Anastasiia Kucherenko, Andrei Kucharavy

Subjects: Computing Technology, Computer Technology

Arthur Wuhrmann, Anastasiia Kucherenko, Andrei Kucharavy. Low-Perplexity LLM-Generated Sequences and Where To Find Them [EB/OL]. (2025-07-02) [2025-07-18]. https://arxiv.org/abs/2507.01844.
