
An attention-aware GNN-based input defender against multi-turn jailbreak on LLMs

Source: arXiv
Abstract

Large Language Models (LLMs) have gained widespread popularity and are increasingly integrated into diverse applications. However, their capabilities can be exploited for harmful as well as benign purposes. Despite rigorous safety training and fine-tuning, LLMs remain vulnerable to jailbreak attacks. Recently, multi-turn attacks have emerged that exacerbate the issue: unlike single-turn attacks, they escalate the dialogue gradually, making them harder to detect and mitigate even after they are identified. In this study, we propose G-Guard, an attention-aware GNN-based input classifier designed to defend against multi-turn jailbreak attacks on LLMs. G-Guard constructs an entity graph over the multi-turn queries that explicitly captures the relationships between harmful keywords and queries, even when those keywords appear only in earlier turns. In addition, we introduce an attention-aware augmentation mechanism that retrieves the single-turn query most similar to the multi-turn conversation; the retrieved query is added to the graph as a labeled node, strengthening the GNN's ability to classify whether the current query is harmful. Evaluation results show that G-Guard outperforms all baselines across all datasets and evaluation metrics.
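To make the pipeline the abstract describes more concrete, the following is a minimal illustrative sketch, not the paper's implementation: the two-layer GCN-style network, the random stand-in embeddings, the toy edge list, and names such as SimpleGNNClassifier are all assumptions for illustration. The paper's actual entity extraction, attention-based retrieval, and GNN architecture may differ.

import torch
import torch.nn as nn
import torch.nn.functional as F

def normalized_adjacency(edges, num_nodes):
    """Symmetrically normalized adjacency matrix with self-loops."""
    A = torch.eye(num_nodes)
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    d_inv_sqrt = A.sum(dim=1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * A * d_inv_sqrt.unsqueeze(0)

class SimpleGNNClassifier(nn.Module):
    """Two-layer GCN-style classifier over the conversation graph (illustrative)."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hid_dim)
        self.lin2 = nn.Linear(hid_dim, 2)  # benign / harmful

    def forward(self, X, A_hat):
        H = F.relu(A_hat @ self.lin1(X))   # message passing, layer 1
        return A_hat @ self.lin2(H)        # message passing, layer 2

# Toy graph for a 3-turn conversation; all node features are random
# stand-ins for real sentence/entity embeddings.
# Nodes 0-2: the three turn queries; node 3: a keyword/entity shared
# across turns; node 4: the retrieved, labeled single-turn query.
EMB_DIM = 32
X = torch.randn(5, EMB_DIM)
edges = [
    (0, 1), (1, 2),  # consecutive turns
    (0, 3), (2, 3),  # entity appears in turn 1 and in the current turn
    (4, 2),          # retrieved labeled query linked to the current query
]
A_hat = normalized_adjacency(edges, num_nodes=5)

model = SimpleGNNClassifier(EMB_DIM, 64)
logits = model(X, A_hat)
print("harmful probability of current query:",
      F.softmax(logits[2], dim=-1)[1].item())

The key point the sketch captures is that message passing lets evidence flow from earlier turns and from the retrieved labeled node into the current-query node (index 2), so a keyword mentioned only in turn 1 can still influence the classification of turn 3.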

Zixuan Huang, Kecheng Huang, Lihao Yin, Bowei He, Huiling Zhen, Mingxuan Yuan, Zili Shao

Computing Technology; Computer Technology

Zixuan Huang, Kecheng Huang, Lihao Yin, Bowei He, Huiling Zhen, Mingxuan Yuan, Zili Shao. An attention-aware GNN-based input defender against multi-turn jailbreak on LLMs [EB/OL]. (2025-07-09) [2025-07-18]. https://arxiv.org/abs/2507.07146.
