
Extracting memorized pieces of (copyrighted) books from open-weight language models

Source: arXiv
Abstract

Plaintiffs and defendants in copyright lawsuits over generative AI often make sweeping, opposing claims about the extent to which large language models (LLMs) have memorized plaintiffs' protected expression. Drawing on adversarial ML and copyright law, we show that these polarized positions dramatically oversimplify the relationship between memorization and copyright. To do so, we leverage a recent probabilistic extraction technique to extract pieces of the Books3 dataset from 13 open-weight LLMs. Through numerous experiments, we show that it's possible to extract substantial parts of at least some books from different LLMs. This is evidence that the LLMs have memorized the extracted text; this memorized content is copied inside the model parameters. But the results are complicated: the extent of memorization varies both by model and by book. With our specific experiments, we find that the largest LLMs don't memorize most books -- either in whole or in part. However, we also find that Llama 3.1 70B memorizes some books, like Harry Potter and 1984, almost entirely. We discuss why our results have significant implications for copyright cases, though not ones that unambiguously favor either side.
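The probabilistic extraction technique referenced above scores, for a prefix drawn from a book, the probability that the model regenerates the true continuation, rather than relying on a single greedy decode. Below is a minimal sketch of that idea, assuming the (n, p)-discoverable extraction formulation (a suffix counts as extractable if it would appear with sufficient probability across n samples); the model name, helper functions, and all parameters are illustrative placeholders, not the paper's actual code.

```python
# Sketch of probabilistic extraction (assumed (n, p)-discoverable formulation):
# score the probability that a model regenerates a book suffix from its prefix,
# then compute the chance of extracting it at least once in n independent samples.
import math

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "meta-llama/Llama-3.1-70B"  # placeholder: any open-weight causal LM

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, torch_dtype=torch.bfloat16)
model.eval()


def suffix_probability(prefix: str, suffix: str) -> float:
    """P(model emits `suffix` token-by-token immediately after `prefix`)."""
    prefix_ids = tokenizer(prefix, return_tensors="pt").input_ids
    suffix_ids = tokenizer(suffix, add_special_tokens=False,
                           return_tensors="pt").input_ids
    full_ids = torch.cat([prefix_ids, suffix_ids], dim=1)

    with torch.no_grad():
        logits = model(full_ids).logits  # (1, seq_len, vocab)
    log_probs = torch.log_softmax(logits.float(), dim=-1)

    # Logits at position i predict the token at position i + 1, so the suffix
    # tokens (positions n_prefix..end) are scored by rows n_prefix-1..end-1.
    n_prefix = prefix_ids.shape[1]
    targets = full_ids[0, n_prefix:]
    preds = log_probs[0, n_prefix - 1 : -1, :]
    log_p = preds.gather(1, targets.unsqueeze(1)).sum().item()
    return math.exp(log_p)


def extraction_probability(p_suffix: float, n_samples: int) -> float:
    """Chance of sampling the exact suffix at least once in n i.i.d. tries."""
    return 1.0 - (1.0 - p_suffix) ** n_samples
```

Under this formulation, a passage provides evidence of memorization when its suffix probability is high enough that extraction becomes near-certain within a realistic sampling budget; sliding such a window across an entire book is, roughly, how per-book memorization rates like those reported for Llama 3.1 70B could be tallied.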

A. Feder Cooper, Aaron Gokaslan, Amy B. Cyphert, Christopher De Sa, Mark A. Lemley, Daniel E. Ho, Percy Liang

Subjects: Computing technology; computer technology

A. Feder Cooper, Aaron Gokaslan, Amy B. Cyphert, Christopher De Sa, Mark A. Lemley, Daniel E. Ho, Percy Liang. Extracting memorized pieces of (copyrighted) books from open-weight language models [EB/OL]. (2025-05-18) [2025-06-09]. https://arxiv.org/abs/2505.12546.
