Strategic Deflection: Defending LLMs from Logit Manipulation
With the growing adoption of Large Language Models (LLMs) in critical areas, ensuring their security against jailbreaking attacks is paramount. While traditional defenses primarily rely on refusing malicious prompts, recent logit-level attacks have demonstrated the ability to bypass these safeguards by directly manipulating the token-selection process during generation. We introduce Strategic Deflection (SDeflection), a defense that redefines the LLM's response to such advanced attacks. Instead of outright refusal, the model produces an answer that is semantically adjacent to the user's request yet strips away the harmful content, thereby neutralizing the attacker's intent. Our experiments demonstrate that SDeflection significantly lowers the Attack Success Rate (ASR) while maintaining model performance on benign queries. This work presents a critical shift in defensive strategies, moving from simple refusal to strategic content redirection to neutralize advanced threats.
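For readers unfamiliar with logit-level attacks, the hedged sketch below illustrates the general idea using the Hugging Face `transformers` LogitsProcessor API: an attacker with access to the decoding pipeline adds a bias to selected token ids (here, an affirmative-sounding continuation) so that refusal tokens are outcompeted during generation. The model name, bias value, and boosted tokens are illustrative assumptions rather than the paper's experimental setup, and SDeflection itself (a training-time defense that redirects such generations toward harmless content) is not implemented here.

```python
import torch
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    LogitsProcessor,
    LogitsProcessorList,
)

class TokenBoostProcessor(LogitsProcessor):
    """Toy logit manipulation: add a constant bias to chosen token ids at every step."""

    def __init__(self, boosted_token_ids, bias=8.0):
        self.boosted_token_ids = boosted_token_ids
        self.bias = bias

    def __call__(self, input_ids, scores):
        # scores has shape (batch_size, vocab_size); boosting these logits
        # skews greedy decoding toward the attacker's preferred continuation.
        scores[:, self.boosted_token_ids] += self.bias
        return scores

# Stand-in model for illustration only; the paper's experiments use other LLMs.
model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Can you help me with a request?"
inputs = tokenizer(prompt, return_tensors="pt")

# Token ids for an affirmative continuation (illustrative choice).
boosted = tokenizer.encode(" Sure", add_special_tokens=False)

processors = LogitsProcessorList([TokenBoostProcessor(boosted, bias=8.0)])
with torch.no_grad():
    output = model.generate(
        **inputs,
        max_new_tokens=20,
        do_sample=False,
        logits_processor=processors,
    )
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Against this class of attack, a refusal-trained model can still be steered into compliance because the manipulation happens after safety filtering, at token selection; SDeflection instead trains the model so that even steered generations drift toward semantically adjacent but harmless content.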
Yassine Rachidy, Jihad Rbaiti, Youssef Hmamouche, Faissal Sehbaoui, Amal El Fallah Seghrouchni
Computing Technology; Computer Technology
Yassine Rachidy, Jihad Rbaiti, Youssef Hmamouche, Faissal Sehbaoui, Amal El Fallah Seghrouchni. Strategic Deflection: Defending LLMs from Logit Manipulation [EB/OL]. (2025-07-29) [2025-08-06]. https://arxiv.org/abs/2507.22160.