GIFT: Gradient-aware Immunization of diffusion models against malicious Fine-Tuning with safe concepts retention
We present GIFT: a Gradient-aware Immunization technique to defend diffusion models against malicious Fine-Tuning while preserving their ability to generate safe content. Existing safety mechanisms, such as safety checkers, are easily bypassed, and concept erasure methods fail under adversarial fine-tuning. GIFT addresses this by framing immunization as a bi-level optimization problem: the upper-level objective degrades the model's ability to represent harmful concepts using representation noising and maximization, while the lower-level objective preserves performance on safe data. Experimental results show that GIFT significantly impairs the model's ability to re-learn harmful concepts while maintaining generative quality on safe content, offering a promising direction for creating inherently safer generative models that resist adversarial fine-tuning attacks.
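A minimal sketch of the bi-level objective described above, using notation that is assumed here rather than taken from the paper (the diffusion loss L_diff, representation-noising penalty L_noise, harmful and safe datasets D_harm and D_safe, and weight lambda are all illustrative symbols):

\[
\max_{\theta}\ \mathcal{L}_{\mathrm{diff}}(\theta;\mathcal{D}_{\mathrm{harm}})
+ \lambda\,\mathcal{L}_{\mathrm{noise}}(\theta;\mathcal{D}_{\mathrm{harm}})
\quad \text{s.t.} \quad
\theta \in \arg\min_{\theta'}\ \mathcal{L}_{\mathrm{diff}}(\theta';\mathcal{D}_{\mathrm{safe}}),
\]

where the upper level maximizes the denoising loss and noises internal representations on harmful-concept data, while the lower level constrains the model to retain standard denoising performance on safe data.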
Amro Abdalla, Ismail Shaheen, Dan DeGenaro, Rupayan Mallick, Bogdan Raita, Sarah Adel Bargal
Subject areas: Computing Technology, Computer Technology
Amro Abdalla, Ismail Shaheen, Dan DeGenaro, Rupayan Mallick, Bogdan Raita, Sarah Adel Bargal. GIFT: Gradient-aware Immunization of diffusion models against malicious Fine-Tuning with safe concepts retention [EB/OL]. (2025-07-18) [2025-08-10]. https://arxiv.org/abs/2507.13598.