SafeConstellations: Steering LLM Safety to Reduce Over-Refusals Through Task-Specific Trajectory


Source: arXiv
Abstract

LLMs increasingly exhibit over-refusal behavior, where safety mechanisms cause models to reject benign instructions that superficially resemble harmful content. This phenomenon diminishes utility in production applications that repeatedly rely on common prompt templates or that frequently use LLMs for specific tasks (e.g., sentiment analysis, language translation). Through comprehensive evaluation, we demonstrate that LLMs still tend to refuse responses to harmful instructions when those instructions are reframed to appear as benign tasks. Our mechanistic analysis reveals that LLMs follow distinct "constellation" patterns in embedding space as representations traverse layers, with each task maintaining consistent trajectories that shift predictably between refusal and non-refusal cases. We introduce SafeConstellations, an inference-time trajectory-shifting approach that tracks task-specific trajectory patterns and guides representations toward non-refusal pathways. By selectively guiding model behavior only on tasks prone to over-refusal, and by preserving general model behavior, our method reduces over-refusal rates by up to 73% with minimal impact on utility, offering a principled approach to mitigating over-refusals.
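The abstract describes guiding per-layer representations toward non-refusal pathways at inference time. The following is a minimal sketch, under heavy assumptions (toy linear layers in place of transformer blocks, a hypothetical `TrajectorySteerer` class, an invented centroid format and steering strength `alpha`), of how per-layer steering toward precomputed non-refusal trajectory centroids could be wired up with PyTorch forward hooks; it is not the authors' released implementation.

```python
import torch
import torch.nn as nn

class TrajectorySteerer:
    """Hypothetical per-layer steering toward 'non-refusal' centroids.

    Assumes centroids were precomputed from hidden states on prompts
    the model answers normally: {layer_index: tensor of shape (hidden,)}.
    """

    def __init__(self, layers, centroids, alpha=0.5):
        self.handles = [
            layer.register_forward_hook(self._make_hook(centroids[i], alpha))
            for i, layer in enumerate(layers)
        ]

    @staticmethod
    def _make_hook(centroid, alpha):
        def hook(module, inputs, output):
            # Nudge the layer output toward the non-refusal centroid;
            # returning a tensor from a forward hook replaces the output.
            return output + alpha * (centroid - output.mean(dim=-2, keepdim=True))
        return hook

    def remove(self):
        for h in self.handles:
            h.remove()

# Toy stand-in for transformer blocks: two linear "layers".
torch.manual_seed(0)
hidden = 8
layers = nn.ModuleList([nn.Linear(hidden, hidden) for _ in range(2)])
centroids = {i: torch.ones(hidden) for i in range(2)}

x = torch.randn(1, 4, hidden)      # (batch, seq, hidden)

baseline = x
for layer in layers:               # unsteered forward pass
    baseline = layer(baseline)

steerer = TrajectorySteerer(layers, centroids, alpha=0.5)
steered = x
for layer in layers:               # steered forward pass
    steered = layer(steered)
steerer.remove()                   # restore normal behavior
```

In the paper's framing, steering would additionally be gated to fire only on tasks prone to over-refusal, leaving general behavior untouched; the gating logic is omitted here.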

Utsav Maskey, Sumit Yadav, Mark Dras, Usman Naseem

Subject areas: Computing Technology, Computer Technology

Utsav Maskey, Sumit Yadav, Mark Dras, Usman Naseem. SafeConstellations: Steering LLM Safety to Reduce Over-Refusals Through Task-Specific Trajectory [EB/OL]. (2025-08-15) [2025-08-28]. https://arxiv.org/abs/2508.11290.
