National Preprint Platform

Jailbreak-R1: Exploring the Jailbreak Capabilities of LLMs via Reinforcement Learning

Source: arXiv
Abstract

As large language models (LLMs) grow in power and influence, ensuring their safety and preventing harmful output becomes critical. Automated red teaming serves as a tool to detect security vulnerabilities in LLMs without manual labor. However, most existing methods struggle to balance the effectiveness and diversity of red-team-generated attack prompts. To address this challenge, we propose Jailbreak-R1, a novel automated red-teaming training framework that uses reinforcement learning to explore and generate more effective attack prompts while preserving their diversity. Specifically, it consists of three training stages: (1) Cold Start: the red-team model is fine-tuned with supervised learning on a jailbreak dataset obtained through imitation learning. (2) Warm-up Exploration: the model is trained on jailbreak instruction following and exploration, using diversity and consistency as reward signals. (3) Enhanced Jailbreak: progressive jailbreak rewards are introduced to gradually strengthen the red-team model's jailbreak performance. Extensive experiments on a variety of LLMs show that Jailbreak-R1 effectively balances the diversity and effectiveness of jailbreak prompts compared to existing methods. Our work significantly improves the efficiency of red-team exploration and offers a new perspective on automated red teaming.
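The staged schedule described above implies a reward that shifts from exploration signals (diversity, consistency) toward jailbreak success as training progresses. The abstract does not give the actual reward formula, so the following is a minimal sketch under the assumption of a simple linear interpolation; the function name, the equal weighting of the exploration terms, and the progress-based schedule are all hypothetical illustrations, not the paper's method:

```python
def progressive_reward(diversity: float,
                       consistency: float,
                       jailbreak_success: float,
                       progress: float) -> float:
    """Hypothetical combined reward for a red-team prompt.

    diversity, consistency, jailbreak_success: scores in [0, 1].
    progress: fraction of the way into the Enhanced Jailbreak stage,
    in [0, 1]. Early in training the exploration terms dominate so the
    model keeps generating varied, on-instruction prompts; later the
    jailbreak-success term is weighted up ("progressive" reward).
    """
    w = min(max(progress, 0.0), 1.0)          # clamp schedule weight to [0, 1]
    exploration = 0.5 * diversity + 0.5 * consistency
    return (1.0 - w) * exploration + w * jailbreak_success


# At progress = 0 the reward is pure exploration; at progress = 1 it is
# pure jailbreak success.
print(progressive_reward(1.0, 1.0, 0.0, 0.0))  # exploration only
print(progressive_reward(0.0, 0.0, 1.0, 1.0))  # jailbreak only
```

Any such scalar reward could then be fed to a standard policy-gradient RL trainer over the red-team model's generated prompts.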

Weiyang Guo, Zesheng Shi, Zhuo Li, Yequan Wang, Xuebo Liu, Wenya Wang, Fangming Liu, Min Zhang, Jing Li

Subject: Computing Technology; Computer Technology

Weiyang Guo, Zesheng Shi, Zhuo Li, Yequan Wang, Xuebo Liu, Wenya Wang, Fangming Liu, Min Zhang, Jing Li. Jailbreak-R1: Exploring the Jailbreak Capabilities of LLMs via Reinforcement Learning [EB/OL]. (2025-05-31) [2025-06-17]. https://arxiv.org/abs/2506.00782.
