
Towards More Realistic Extraction Attacks: An Adversarial Perspective

Source: arXiv

Abstract

Language models are prone to memorizing their training data, making them vulnerable to extraction attacks. While existing research often examines isolated setups, such as a single model or a fixed prompt, real-world adversaries have a considerably larger attack surface due to access to models across various sizes and checkpoints, as well as repeated prompting. In this paper, we revisit extraction attacks from an adversarial perspective -- with multi-faceted access to the underlying data. We find significant churn in extraction trends, i.e., even unintuitive changes to the prompt, or targeting smaller models and earlier checkpoints, can extract distinct information. By combining multiple attacks, our adversary doubles ($2 \times$) the extraction risks, persisting even under mitigation strategies like data deduplication. We conclude with four case studies, including detecting pre-training data, identifying copyright violations, extracting personally identifiable information, and attacking closed-source models, showing how our more realistic adversary can outperform existing adversaries in the literature.
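The abstract describes an adversary that unions extraction successes across model sizes, training checkpoints, and repeated prompting instead of relying on one fixed setup. Below is a minimal illustrative sketch of such a union-style attack using the Hugging Face transformers API; the Pythia model names, checkpoint revisions, and sampling parameters are assumptions for illustration and are not taken from the paper.

```python
# Minimal sketch (not the authors' code): a multi-faceted adversary that
# credits a target sequence as extracted if *any* combination of model size,
# checkpoint, and sampled continuation reproduces it verbatim.
# Model names and revisions are illustrative (Pythia-style) assumptions.

from transformers import AutoModelForCausalLM, AutoTokenizer

MODELS = ["EleutherAI/pythia-410m", "EleutherAI/pythia-1.4b"]   # different model sizes
REVISIONS = ["step100000", "main"]                              # earlier vs. final checkpoint
N_SAMPLES = 4                                                   # repeated prompting per setup


def suffix_extracted(model, tokenizer, prompt, target_suffix):
    """Return True if any sampled continuation contains the target suffix verbatim."""
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        do_sample=True,                 # repeated sampling enlarges the attack surface
        top_k=40,
        max_new_tokens=64,
        num_return_sequences=N_SAMPLES,
        pad_token_id=tokenizer.eos_token_id,
    )
    prompt_len = inputs["input_ids"].shape[1]
    continuations = [
        tokenizer.decode(out[prompt_len:], skip_special_tokens=True)
        for out in outputs
    ]
    return any(target_suffix in c for c in continuations)


def union_attack(prompts_and_targets):
    """Count targets extracted by any (model, checkpoint, prompt) combination."""
    extracted = set()
    for name in MODELS:
        for rev in REVISIONS:
            tokenizer = AutoTokenizer.from_pretrained(name, revision=rev)
            model = AutoModelForCausalLM.from_pretrained(name, revision=rev)
            for i, (prompt, suffix) in enumerate(prompts_and_targets):
                if suffix_extracted(model, tokenizer, prompt, suffix):
                    extracted.add(i)    # union over setups: any success counts
    return len(extracted)
```

The key design point mirrored here is the union: extraction risk is measured over the set of sequences recovered by any setup, which is why combining attacks can roughly double the risk reported for any single fixed configuration.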

Golnoosh Farnadi, Yash More, Prakhar Ganesh

Subject: Computing Technology, Computer Technology

Golnoosh Farnadi, Yash More, Prakhar Ganesh. Towards More Realistic Extraction Attacks: An Adversarial Perspective [EB/OL]. (2025-08-08) [2025-08-24]. https://arxiv.org/abs/2407.02596.
