GASP: Efficient Black-Box Generation of Adversarial Suffixes for Jailbreaking LLMs
LLMs have shown impressive capabilities across various natural language processing tasks, yet remain vulnerable to carefully crafted input prompts, known as jailbreak attacks, that are designed to bypass safety guardrails and elicit harmful responses. Traditional jailbreak methods rely on manual heuristics and thus generalize poorly. Optimization-based attacks, while automatic, often produce unnatural prompts that safety filters can easily detect, or incur high computational cost due to discrete token optimization. In this paper, we introduce Generative Adversarial Suffix Prompter (GASP), a novel automated framework that can efficiently generate human-readable jailbreak prompts in a fully black-box setting. In particular, GASP leverages latent Bayesian optimization to craft adversarial suffixes by efficiently exploring continuous latent embedding spaces, gradually optimizing the suffix prompter to improve attack efficacy while balancing prompt coherence via a targeted iterative refinement procedure. Through comprehensive experiments, we show that GASP can produce natural adversarial prompts, significantly improving jailbreak success over baselines, reducing training times, and accelerating inference speed, thus making it an efficient and scalable solution for red-teaming LLMs.
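The core idea described above, searching a continuous latent space with Bayesian optimization instead of optimizing discrete tokens, can be sketched as follows. This is an illustrative, from-scratch sketch, not the authors' implementation: `attack_score` is a hypothetical stand-in for querying the target LLM with a suffix decoded from a latent vector `z`, and the Gaussian-process surrogate, RBF kernel length scale, and UCB acquisition are assumed choices for demonstration only.

```python
import math
import random

def rbf(x, y, ell=0.5):
    # RBF kernel over latent vectors; correlation decays with distance
    return math.exp(-sum((a - b) ** 2 for a, b in zip(x, y)) / (2 * ell * ell))

def solve(A, b):
    # Gaussian elimination with partial pivoting (keeps the sketch dependency-free)
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def gp_posterior(X, y, z, noise=1e-5):
    # GP posterior mean/variance at candidate z, with the observed mean as prior mean
    ym = sum(y) / len(y)
    resid = [v - ym for v in y]
    K = [[rbf(a, b) + (noise if i == j else 0.0)
          for j, b in enumerate(X)] for i, a in enumerate(X)]
    ks = [rbf(a, z) for a in X]
    alpha = solve(K, resid)
    mean = ym + sum(k * a for k, a in zip(ks, alpha))
    w = solve(K, ks)
    var = max(1e-12, 1.0 - sum(k * wi for k, wi in zip(ks, w)))  # rbf(z, z) = 1
    return mean, var

def attack_score(z):
    # Hypothetical black-box objective: in GASP this would be the (expensive)
    # target-LLM response score for the suffix decoded from z. Here: a toy
    # landscape peaking at a hidden point, so higher is "more successful".
    target = (0.3, -0.7)
    return -sum((a - b) ** 2 for a, b in zip(z, target))

def latent_bo(n_init=4, n_iter=12, n_cand=200, beta=2.0, seed=0):
    # Latent Bayesian optimization: fit a GP surrogate to past (z, score) pairs,
    # then pick the next latent query by maximizing the UCB acquisition.
    rng = random.Random(seed)
    rand_z = lambda: (rng.uniform(-1.0, 1.0), rng.uniform(-1.0, 1.0))
    X = [rand_z() for _ in range(n_init)]
    y = [attack_score(z) for z in X]
    for _ in range(n_iter):
        best_c, best_u = None, -float("inf")
        for z in (rand_z() for _ in range(n_cand)):
            m, v = gp_posterior(X, y, z)
            u = m + beta * math.sqrt(v)  # UCB: exploit high mean, explore high variance
            if u > best_u:
                best_c, best_u = z, u
        X.append(best_c)
        y.append(attack_score(best_c))  # one black-box query per iteration
    i = max(range(len(y)), key=lambda j: y[j])
    return y[i], X[i]

if __name__ == "__main__":
    score, z = latent_bo()
    print(f"best score {score:.4f} at latent point {z}")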
Advik Raj Basani, Xiao Zhang
Computing Technology, Computer Technology
Advik Raj Basani, Xiao Zhang. GASP: Efficient Black-Box Generation of Adversarial Suffixes for Jailbreaking LLMs [EB/OL]. (2025-06-25) [2025-07-20]. https://arxiv.org/abs/2411.14133.