Membership Inference Attack Should Move On to Distributional Statistics for Distilled Generative Models
To detect unauthorized data usage in training large-scale generative models (e.g., ChatGPT or Midjourney), membership inference attacks (MIA) have proven effective in distinguishing a single training instance (a member) from a single non-training instance (a non-member). This success is mainly credited to a memorization effect: models tend to perform better on a member than on a non-member. However, we find that standard MIAs fail against distilled generative models (i.e., student models) that are increasingly deployed in practice for efficiency (e.g., ChatGPT 4o-mini). Trained exclusively on data generated by a large-scale model (a teacher model), the student model lacks direct exposure to any members (the teacher's training data), nullifying the memorization effect that standard MIAs rely on. This finding reveals a serious privacy loophole, where generation-service providers could deploy a student model whose teacher was potentially trained on unauthorized data, yet claim the deployed model is clean because it was not directly trained on such data. Hence, are distilled models inherently unauditable for upstream privacy violations, and should we discard them when we care about privacy? We contend no, as we uncover a memory chain connecting the student model to the teacher's member data: the distribution of student-generated data aligns more closely with the distribution of the teacher's members than with that of non-members, so we can detect unauthorized data usage even when direct instance-level memorization is absent. This leads us to posit that MIAs on distilled generative models should shift from instance-level scores to distribution-level statistics. We further propose three principles of distribution-based MIAs for detecting unauthorized training data through distilled generative models, and validate our position through an exemplar framework. Lastly, we discuss the implications of our position.
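The abstract does not spell out the exemplar framework, but the core idea (comparing the distribution of student-generated data against candidate member and non-member sets rather than scoring single instances) can be illustrated with a two-sample statistic. The sketch below uses maximum mean discrepancy (MMD) with a Gaussian kernel; the choice of MMD, the feature representation, and all function names are illustrative assumptions, not the paper's method.

```python
# Minimal sketch of a distribution-level membership test (assumed setup, not the
# authors' exemplar framework): decide whether a candidate dataset is closer, in
# distribution, to a student model's generations than a reference (non-member) set.
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    """Gaussian (RBF) kernel matrix between rows of x and rows of y."""
    sq_dists = (np.sum(x**2, axis=1)[:, None]
                + np.sum(y**2, axis=1)[None, :]
                - 2.0 * x @ y.T)
    return np.exp(-sq_dists / (2.0 * sigma**2))

def mmd2(x, y, sigma=1.0):
    """Biased estimate of squared MMD between two samples x and y."""
    k_xx = gaussian_kernel(x, x, sigma).mean()
    k_yy = gaussian_kernel(y, y, sigma).mean()
    k_xy = gaussian_kernel(x, y, sigma).mean()
    return k_xx + k_yy - 2.0 * k_xy

def distributional_membership_score(student_samples, candidate_set, reference_set, sigma=1.0):
    """Positive score: the candidate set is distributionally closer to the
    student-generated data than the reference set, hinting the teacher may
    have been trained on the candidate data."""
    return (mmd2(student_samples, reference_set, sigma)
            - mmd2(student_samples, candidate_set, sigma))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy embeddings standing in for features of real data and generations.
    members = rng.normal(0.0, 1.0, size=(500, 16))      # teacher's (alleged) training data
    non_members = rng.normal(1.5, 1.0, size=(500, 16))   # data never seen by the teacher
    student_out = rng.normal(0.1, 1.0, size=(500, 16))   # student generations, near members

    print("candidate = members:    ",
          distributional_membership_score(student_out, members, non_members))
    print("candidate = non-members:",
          distributional_membership_score(student_out, non_members, members))
```

On the toy data, the score is positive when the candidate set is the member data and negative otherwise, mirroring the abstract's claim that the memory chain survives distillation at the distribution level even though no single member instance was seen by the student.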
Muxing Li, Zesheng Ye, Yixuan Li, Andy Song, Guangquan Zhang, Feng Liu
Computing Technology, Computer Technology
Muxing Li, Zesheng Ye, Yixuan Li, Andy Song, Guangquan Zhang, Feng Liu. Membership Inference Attack Should Move On to Distributional Statistics for Distilled Generative Models [EB/OL]. (2025-06-19) [2025-07-16]. https://arxiv.org/abs/2502.02970