
Breaking Bad Tokens: Detoxification of LLMs Using Sparse Autoencoders


Source: arXiv
Abstract (English)

Large language models (LLMs) are now ubiquitous in user-facing applications, yet they still generate undesirable toxic outputs, including profanity, vulgarity, and derogatory remarks. Although numerous detoxification methods exist, most apply broad, surface-level fixes and can therefore easily be circumvented by jailbreak attacks. In this paper we leverage sparse autoencoders (SAEs) to identify toxicity-related directions in the residual stream of models and perform targeted activation steering using the corresponding decoder vectors. We introduce three tiers of steering aggressiveness and evaluate them on GPT-2 Small and Gemma-2-2B, revealing trade-offs between toxicity reduction and language fluency. At stronger steering strengths, these causal interventions surpass competitive baselines in reducing toxicity by up to 20%, though fluency can degrade noticeably on GPT-2 Small depending on the aggressiveness. Crucially, standard NLP benchmark scores upon steering remain stable, indicating that the model's knowledge and general abilities are preserved. We further show that feature-splitting in wider SAEs hampers safety interventions, underscoring the importance of disentangled feature learning. Our findings highlight both the promise and the current limitations of SAE-based causal interventions for LLM detoxification, further suggesting practical guidelines for safer language-model deployment.
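
The core intervention described above, steering the residual stream along an SAE decoder direction, can be sketched roughly as follows. This is a minimal illustration and not the authors' implementation: the layer index, the steering coefficient (standing in for a "tier" of aggressiveness), and the random placeholder for the toxicity decoder vector are all assumptions for demonstration; in practice the direction would be the decoder row of a toxicity-related SAE feature.

```python
# Minimal sketch of SAE decoder-vector activation steering on GPT-2 Small.
# Assumptions: hook layer, steering strength, and the toxicity direction
# (random placeholder instead of a trained SAE decoder row).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

model_name = "gpt2"  # GPT-2 Small
tok = GPT2Tokenizer.from_pretrained(model_name)
model = GPT2LMHeadModel.from_pretrained(model_name).eval()

layer_idx = 6   # assumed hook point in the residual stream
alpha = 8.0     # steering strength ("aggressiveness" knob)
d_model = model.config.n_embd

# Placeholder for the SAE decoder vector of a toxicity feature.
toxic_dir = torch.randn(d_model)
toxic_dir = toxic_dir / toxic_dir.norm()

def steer_hook(module, inputs, output):
    # GPT2Block returns a tuple; hidden states are the first element.
    hidden = output[0]
    # Push every token's residual-stream activation away from the
    # toxicity direction by a fixed amount.
    hidden = hidden - alpha * toxic_dir.to(hidden.dtype)
    return (hidden,) + output[1:]

handle = model.transformer.h[layer_idx].register_forward_hook(steer_hook)

prompt = "You are such a"
ids = tok(prompt, return_tensors="pt")
with torch.no_grad():
    out = model.generate(**ids, max_new_tokens=20, do_sample=False,
                         pad_token_id=tok.eos_token_id)
print(tok.decode(out[0], skip_special_tokens=True))

handle.remove()  # restore the unsteered model
```

Raising `alpha` corresponds to the stronger steering tiers discussed in the abstract: toxicity drops further, but fluency can degrade, which is the trade-off the paper evaluates.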

Agam Goyal, Vedant Rathi, William Yeh, Yian Wang, Yuen Chen, Hari Sundaram

Subjects: Information Science and Information Technology; Computing Technology and Computer Technology

Agam Goyal, Vedant Rathi, William Yeh, Yian Wang, Yuen Chen, Hari Sundaram. Breaking Bad Tokens: Detoxification of LLMs Using Sparse Autoencoders [EB/OL]. (2025-05-20) [2025-06-16]. https://arxiv.org/abs/2505.14536.
