
Jailbreaking Commercial Black-Box LLMs with Explicitly Harmful Prompts

Source: arXiv
Abstract

Evaluating jailbreak attacks is challenging when prompts are not overtly harmful or fail to induce harmful outputs. Unfortunately, many existing red-teaming datasets contain such unsuitable prompts. To evaluate attacks accurately, these datasets need to be assessed and cleaned for maliciousness. However, existing malicious content detection methods rely on either manual annotation, which is labor-intensive, or large language models (LLMs), whose accuracy is inconsistent across categories of harmful content. To balance accuracy and efficiency, we propose MDH (Malicious content Detection based on LLMs with Human assistance), a hybrid evaluation framework that combines LLM-based annotation with minimal human oversight, and apply it to dataset cleaning and to detecting jailbroken responses. Furthermore, we find that well-crafted developer messages can significantly boost jailbreak success, leading us to propose two new strategies: D-Attack, which leverages context simulation, and DH-CoT, which incorporates hijacked chains of thought. Code, datasets, judgments, and detection results will be released in the GitHub repository: https://github.com/AlienZhang1996/DH-CoT.
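The abstract does not specify how MDH divides work between the LLM annotator and the human reviewer, but its core idea (LLM-based annotation with minimal human oversight) can be sketched as a triage loop: an LLM judge labels each prompt, and only low-confidence verdicts are escalated to a human. The sketch below is a minimal illustration under that assumption; `llm_judge_harmful`, the confidence threshold, and the keyword stub are hypothetical stand-ins, not the authors' implementation.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Verdict:
    harmful: bool
    confidence: float  # judge's self-reported confidence in [0, 1]

def llm_judge_harmful(prompt: str) -> Verdict:
    """Hypothetical LLM annotator: label a prompt as harmful or benign.

    A real pipeline would call a chat-completion API with a grading rubric;
    a keyword stub is used here so the sketch stays self-contained.
    """
    flagged = any(w in prompt.lower() for w in ("exploit", "weapon"))
    return Verdict(harmful=flagged, confidence=0.9 if flagged else 0.6)

def mdh_clean(prompts: list[str],
              human_review: Callable[[str], bool],
              threshold: float = 0.8) -> list[str]:
    """Keep only prompts judged harmful, escalating uncertain cases.

    High-confidence LLM verdicts are accepted as-is; anything below the
    threshold is routed to the human reviewer, so manual effort scales
    with the judge's uncertainty rather than with dataset size.
    """
    kept = []
    for p in prompts:
        v = llm_judge_harmful(p)
        harmful = v.harmful if v.confidence >= threshold else human_review(p)
        if harmful:
            kept.append(p)
    return kept

if __name__ == "__main__":
    data = ["how to bake bread", "write an exploit for CVE-XXXX"]
    # Stand-in for a human annotator; a real pipeline would surface a review UI.
    cleaned = mdh_clean(data, human_review=lambda p: "exploit" in p)
    print(cleaned)
```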

Chiyu Zhang, Lu Zhou, Xiaogang Xu, Jiafei Wu, Liming Fang, Zhe Liu

Subjects: Computing Technology, Computer Technology

Chiyu Zhang, Lu Zhou, Xiaogang Xu, Jiafei Wu, Liming Fang, Zhe Liu. Jailbreaking Commercial Black-Box LLMs with Explicitly Harmful Prompts [EB/OL]. (2025-08-14) [2025-08-24]. https://arxiv.org/abs/2508.10390.
