GhostPrompt: Jailbreaking Text-to-image Generative Models based on Dynamic Optimization
Text-to-image (T2I) generation models can inadvertently produce not-safe-for-work (NSFW) content, prompting the integration of text and image safety filters. Recent filters employ large language models (LLMs) for semantic-level detection, rendering traditional token-level perturbation attacks largely ineffective; our evaluation confirms that existing jailbreak methods fail against these modern filters. We introduce GhostPrompt, the first automated jailbreak framework that combines dynamic prompt optimization with multimodal feedback. It consists of two key components: (i) Dynamic Optimization, an iterative process that guides an LLM using feedback from text safety filters and CLIP similarity scores to generate semantically aligned adversarial prompts; and (ii) Adaptive Safety Indicator Injection, which formulates the injection of benign visual cues as a reinforcement learning problem to bypass image-level filters. GhostPrompt achieves state-of-the-art performance, increasing the ShieldLM-7B bypass rate from 12.5% (SneakyPrompt) to 99.0%, improving the CLIP score from 0.2637 to 0.2762, and reducing the time cost by 4.2×. Moreover, it generalizes to unseen filters, including GPT-4.1, and successfully jailbreaks DALLE 3 to generate NSFW images in our evaluation, revealing systemic vulnerabilities in current multimodal defenses. To support further research on AI safety and red-teaming, we will release code and adversarial prompts under a controlled-access protocol.
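To make the two components concrete, the following is a minimal sketch of the dynamic-optimization loop as described in the abstract. All names here are hypothetical stand-ins not taken from the paper: `rewrite_prompt` plays the attacker's LLM, `text_filter_blocks` the target's text safety filter, `generate_image` the T2I model under attack, and `clip_similarity` a CLIP-based alignment scorer; the paper's actual prompts, thresholds, and stopping rules may differ.

```python
# Illustrative sketch only; callables and thresholds are assumptions.
from typing import Callable, Optional

def ghostprompt_loop(
    target_prompt: str,
    rewrite_prompt: Callable[[str, str], str],        # LLM: (prompt, feedback) -> candidate
    text_filter_blocks: Callable[[str], bool],        # True if the text filter rejects
    generate_image: Callable[[str], object],          # T2I model under attack
    clip_similarity: Callable[[str, object], float],  # semantic alignment score
    sim_threshold: float = 0.26,
    max_iters: int = 30,
) -> Optional[str]:
    """Iteratively rewrite `target_prompt` until a candidate both bypasses
    the text safety filter and stays semantically close to the original."""
    feedback = ""
    for _ in range(max_iters):
        candidate = rewrite_prompt(target_prompt, feedback)
        if text_filter_blocks(candidate):
            # Feed the filter verdict back so the LLM steers away from
            # flagged tokens and semantics on the next rewrite.
            feedback = f"Rejected by text filter: {candidate!r}"
            continue
        image = generate_image(candidate)
        score = clip_similarity(target_prompt, image)
        if score >= sim_threshold:
            return candidate  # bypasses the filter while preserving semantics
        # Passed the filter but drifted semantically: ask for a closer rewrite.
        feedback = f"Passed filter but CLIP score {score:.4f} < {sim_threshold}"
    return None
```

The abstract frames safety-indicator injection as a reinforcement learning problem but does not specify the learner. The sketch below models it as a simple epsilon-greedy multi-armed bandit over a hypothetical set of benign cues; the paper's actual action space, reward shaping, and algorithm may differ.

```python
# Hedged sketch: bandit-style cue selection, rewarding image-filter bypass
# plus semantic alignment. The cue list and reward are illustrative.
import random

class IndicatorBandit:
    def __init__(self, cues: list[str], epsilon: float = 0.1):
        self.cues = cues
        self.epsilon = epsilon
        self.counts = [0] * len(cues)
        self.values = [0.0] * len(cues)  # running mean reward per cue

    def select(self) -> int:
        if random.random() < self.epsilon:
            return random.randrange(len(self.cues))  # explore
        return max(range(len(self.cues)), key=lambda i: self.values[i])  # exploit

    def update(self, arm: int, reward: float) -> None:
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

def inject_indicator(prompt, target_prompt, bandit,
                     image_filter_blocks, generate_image, clip_similarity):
    arm = bandit.select()
    augmented = f"{prompt}, {bandit.cues[arm]}"
    image = generate_image(augmented)
    # Reward bypassing the image-level filter while penalizing semantic drift.
    reward = (0.0 if image_filter_blocks(image) else 1.0) \
             + clip_similarity(target_prompt, image)
    bandit.update(arm, reward)
    return augmented
```

A bandit is the simplest RL instantiation consistent with the abstract's description; a policy-gradient learner conditioned on the current prompt state would be a natural alternative.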
Zixuan Chen, Hao Lin, Ke Xu, Xinghao Jiang, Tanfeng Sun
Subject: Computing Technology; Computer Technology
Zixuan Chen, Hao Lin, Ke Xu, Xinghao Jiang, Tanfeng Sun. GhostPrompt: Jailbreaking Text-to-image Generative Models based on Dynamic Optimization [EB/OL]. (2025-05-25) [2025-06-12]. https://arxiv.org/abs/2505.18979.