Don't Say No: Jailbreaking LLM by Suppressing Refusal
Ensuring the safety alignment of Large Language Models (LLMs) is critical for generating responses consistent with human values. However, LLMs remain vulnerable to jailbreaking attacks, where carefully crafted prompts manipulate them into producing toxic content. One category of such attacks reformulates jailbreaking as an optimization problem that aims to elicit affirmative responses from the LLM. These methods, however, rely heavily on predefined objectionable behaviors, which limits their effectiveness and adaptability to diverse harmful queries. In this study, we first identify why the vanilla target loss is suboptimal and then propose enhancements to the loss objective. We introduce the DSN (Don't Say No) attack, which combines a cosine decay schedule with refusal suppression to achieve higher attack success rates. Extensive experiments demonstrate that DSN outperforms baseline attacks and achieves state-of-the-art attack success rates (ASR). DSN also exhibits strong universality and transferability to unseen datasets and black-box models.
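The abstract does not give the exact loss formulation, so the following is only a minimal, illustrative sketch of how a DSN-style objective could be assembled: a standard affirmative-target loss combined with a refusal-suppression term whose weight follows a cosine decay schedule. All identifiers here (dsn_style_loss, refusal_token_ids, alpha_max, the toy tensors) are assumptions for illustration, not the authors' implementation.

# Illustrative sketch of a DSN-style loss (assumed formulation, not the paper's code)
import math
import torch
import torch.nn.functional as F

def cosine_decay(step: int, total_steps: int,
                 alpha_max: float = 1.0, alpha_min: float = 0.0) -> float:
    # Cosine-decayed weight for the refusal-suppression term:
    # starts at alpha_max and decays to alpha_min over the optimization.
    progress = min(step / max(total_steps, 1), 1.0)
    return alpha_min + 0.5 * (alpha_max - alpha_min) * (1 + math.cos(math.pi * progress))

def dsn_style_loss(logits: torch.Tensor,            # (seq_len, vocab) next-token logits
                   target_ids: torch.Tensor,        # affirmative target token ids, (seq_len,)
                   refusal_token_ids: torch.Tensor, # ids of refusal words ("Sorry", "cannot", ...)
                   step: int, total_steps: int) -> torch.Tensor:
    # 1) Vanilla target loss: push the model toward the affirmative response.
    target_loss = F.cross_entropy(logits, target_ids)

    # 2) Refusal suppression: penalize probability mass placed on refusal tokens.
    log_probs = F.log_softmax(logits, dim=-1)                           # (seq_len, vocab)
    refusal_logprob = log_probs[:, refusal_token_ids].logsumexp(dim=-1) # (seq_len,)
    refusal_loss = refusal_logprob.mean()  # minimizing this drives refusal probability down

    # 3) Combine the two terms with a cosine-decayed weight.
    alpha = cosine_decay(step, total_steps)
    return target_loss + alpha * refusal_loss

# Toy usage with random logits standing in for a real model forward pass.
vocab, seq_len = 32000, 8
logits = torch.randn(seq_len, vocab)
target_ids = torch.randint(0, vocab, (seq_len,))
refusal_token_ids = torch.tensor([42, 1337, 2024])  # placeholder ids for refusal words
loss = dsn_style_loss(logits, target_ids, refusal_token_ids, step=10, total_steps=500)
print(loss.item())

In this sketch the suppression term dominates early in the optimization and fades as the cosine schedule decays, leaving the affirmative-target loss to drive the final prompt; the actual weighting and suppression formulation used by DSN is described in the paper.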
Zhijie Huang, Yukai Zhou, Jian Lou, Zhan Qin, Yibei Yang, Wenjie Wang
Computing Technology; Computer Technology
Zhijie Huang, Yukai Zhou, Jian Lou, Zhan Qin, Yibei Yang, Wenjie Wang. Don't Say No: Jailbreaking LLM by Suppressing Refusal [EB/OL]. (2025-07-02) [2025-07-21]. https://arxiv.org/abs/2404.16369