
AuthPrint: Fingerprinting Generative Models Against Malicious Model Providers

Source: arXiv
Abstract

Generative models are increasingly adopted in high-stakes domains, yet current deployments offer no mechanisms to verify the origin of model outputs. We address this gap by extending model fingerprinting techniques beyond the traditional collaborative setting to one where the model provider may act adversarially. To our knowledge, this is the first work to evaluate fingerprinting for provenance attribution under such a threat model. The methods rely on a trusted verifier that extracts secret fingerprints from the model's output space, unknown to the provider, and trains a model to predict and verify them. Our empirical evaluation shows that our methods achieve near-zero FPR@95%TPR for instances of GAN and diffusion models, even when tested on small modifications to the original architecture and training data. Moreover, the methods remain robust against adversarial attacks that actively modify the outputs to bypass detection. Source code is available at https://github.com/PSMLab/authprint.
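The abstract does not spell out the verification mechanism, so the following PyTorch sketch is only one plausible reading of the idea: the verifier keeps a secret subset of output coordinates as the fingerprint, trains a predictor to estimate those values from the remaining coordinates of genuine outputs, and accepts an output when the prediction error falls below a threshold. The coordinate-subset construction, network sizes, and threshold here are illustrative assumptions, not the paper's actual design; see the linked repository for the real implementation.

```python
import torch
import torch.nn as nn

# Hypothetical dimensions: the generator maps a latent z to a flat output.
LATENT_DIM, OUTPUT_DIM, FP_DIM = 128, 3 * 32 * 32, 16

# The verifier's secret: a random subset of output coordinates whose values
# serve as the fingerprint. The provider never learns these indices.
secret_idx = torch.randperm(OUTPUT_DIM)[:FP_DIM]
mask = torch.ones(OUTPUT_DIM, dtype=torch.bool)
mask[secret_idx] = False  # complement of the secret coordinates

def fingerprint(output: torch.Tensor) -> torch.Tensor:
    """Read the secret fingerprint values off a batch of outputs."""
    return output[:, secret_idx]

# Predictor: estimates the secret coordinates from the non-secret ones.
predictor = nn.Sequential(
    nn.Linear(OUTPUT_DIM - FP_DIM, 256), nn.ReLU(), nn.Linear(256, FP_DIM)
)

def train_predictor(generator, steps=1000, batch=64, lr=1e-3):
    """Fit the predictor on genuine outputs of the fingerprinted model.
    `generator` is a hypothetical callable: latent batch -> flat outputs."""
    opt = torch.optim.Adam(predictor.parameters(), lr=lr)
    for _ in range(steps):
        with torch.no_grad():
            out = generator(torch.randn(batch, LATENT_DIM))
        loss = nn.functional.mse_loss(predictor(out[:, mask]), fingerprint(out))
        opt.zero_grad()
        loss.backward()
        opt.step()

def verify(output: torch.Tensor, threshold: float = 1e-3) -> bool:
    """Accept the output as originating from the fingerprinted model if the
    predicted fingerprint matches the observed secret coordinates."""
    with torch.no_grad():
        err = nn.functional.mse_loss(predictor(output[:, mask]),
                                     fingerprint(output))
    return err.item() < threshold
```

Because the secret indices are unknown to the provider, a substituted or modified model would have to reproduce the verifier's learned correlations across the entire output space to pass, which is what makes the check hard to evade.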

Kai Yao, Marc Juarez

Computing Technology, Computer Technology

Kai Yao, Marc Juarez. AuthPrint: Fingerprinting Generative Models Against Malicious Model Providers [EB/OL]. (2025-08-06) [2025-08-24]. https://arxiv.org/abs/2508.05691.
