
Highlight & Summarize: RAG without the jailbreaks

Source: arXiv
Abstract

Preventing jailbreaking and model hijacking of Large Language Models (LLMs) is an important yet challenging task. For example, when interacting with a chatbot, malicious users can input specially crafted prompts to cause the LLM to generate undesirable content or perform a completely different task from its intended purpose. Existing mitigations for such attacks typically rely on hardening the LLM's system prompt or using a content classifier trained to detect undesirable content or off-topic conversations. However, these probabilistic approaches are relatively easy to bypass due to the very large space of possible inputs and undesirable outputs. In this paper, we present and evaluate Highlight & Summarize (H&S), a new design pattern for retrieval-augmented generation (RAG) systems that prevents these attacks by design. The core idea is to perform the same task as a standard RAG pipeline (i.e., to provide natural language answers to questions, based on relevant sources) without ever revealing the user's question to the generative LLM. This is achieved by splitting the pipeline into two components: a highlighter, which takes the user's question and extracts relevant passages ("highlights") from the retrieved documents, and a summarizer, which takes the highlighted passages and summarizes them into a cohesive answer. We describe several possible instantiations of H&S and evaluate their generated responses in terms of correctness, relevance, and response quality. Surprisingly, when using an LLM-based highlighter, the majority of H&S responses are judged to be better than those of a standard RAG pipeline.
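The split described in the abstract can be made concrete with a short sketch. The Python below is an illustrative toy, not the authors' implementation: the keyword-overlap highlighter and the template summarizer stand in for the LLM-based components evaluated in the paper, and all function and parameter names are hypothetical. The key property it demonstrates is that the second stage never receives the user's question.

```python
# Minimal sketch of the Highlight & Summarize (H&S) pattern.
# Assumptions: highlight() and summarize() are toy stand-ins for the
# LLM-based highlighter/summarizer described in the paper.

def highlight(question: str, documents: list[str], top_k: int = 3) -> list[str]:
    """Stage 1: sees the user's question and the retrieved documents,
    and returns only the relevant passages ("highlights").

    Toy scoring: rank sentences by word overlap with the question.
    """
    query_terms = set(question.lower().split())
    sentences = [s.strip() for doc in documents for s in doc.split(".") if s.strip()]
    scored = sorted(
        sentences,
        key=lambda s: len(query_terms & set(s.lower().split())),
        reverse=True,
    )
    return scored[:top_k]


def summarize(highlights: list[str]) -> str:
    """Stage 2: sees ONLY the highlighted passages, never the user's
    question, so a jailbreak payload embedded in the question cannot
    reach the generative model.
    """
    # Stand-in for an LLM call that would rewrite the highlights
    # into a cohesive natural-language answer.
    return " ".join(h.rstrip(".") + "." for h in highlights)


def hs_answer(question: str, retrieved_docs: list[str]) -> str:
    """End-to-end H&S pipeline: retrieve -> highlight -> summarize."""
    highlights = highlight(question, retrieved_docs)
    return summarize(highlights)


if __name__ == "__main__":
    docs = [
        "RAG systems retrieve documents and pass them to a generative model. "
        "Prompt injection can hijack the generator when it sees raw user input.",
        "The H&S pattern splits the pipeline into a highlighter and a summarizer.",
    ]
    print(hs_answer("How does H&S prevent prompt injection?", docs))
```

In a faithful instantiation, the highlighter would itself be an LLM (or an extractive model) constrained to return verbatim spans from the retrieved documents, which is what limits the attacker's influence on the summarizer's input.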

Giovanni Cherubin, Andrew Paverd

Computing Technology, Computer Technology

Giovanni Cherubin, Andrew Paverd. Highlight & Summarize: RAG without the jailbreaks [EB/OL]. (2025-08-04) [2025-08-16]. https://arxiv.org/abs/2508.02872.
