Linearly Decoding Refused Knowledge in Aligned Language Models

Source: arXiv
Abstract

Most commonly used language models (LMs) are instruction-tuned and aligned using a combination of fine-tuning and reinforcement learning, causing them to refuse user requests deemed harmful by the model. However, jailbreak prompts can often bypass these refusal mechanisms and elicit harmful responses. In this work, we study the extent to which information accessed via jailbreak prompts is decodable using linear probes trained on LM hidden states. We show that a great deal of initially refused information is linearly decodable. For example, across models, a jailbroken LM's response for the average IQ of a country can be predicted by a linear probe with Pearson correlations exceeding $0.8$. Surprisingly, we find that probes trained on base models (which do not refuse) sometimes transfer to their instruction-tuned versions and are capable of revealing information that jailbreaks decode generatively, suggesting that the internal representations of many refused properties persist from base LMs through instruction-tuning. Importantly, we show that this information is not merely "leftover" in instruction-tuned models but is actively used by them: probe-predicted values correlate with LM-generated pairwise comparisons, indicating that the information decoded by our probes aligns with suppressed generative behavior that may be expressed more subtly in other downstream tasks. Overall, our results suggest that instruction-tuning does not wholly eliminate or even relocate harmful information in representation space; it merely suppresses its direct expression, leaving it both linearly accessible and indirectly influential in downstream behavior.
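To make the probing setup concrete, below is a minimal sketch (not the authors' released code) of training a linear probe on LM hidden states to predict a scalar property and scoring it with Pearson correlation. The model name, layer choice, prompt template, and placeholder data are all illustrative assumptions; the paper works with larger base and instruction-tuned models and real property values.

```python
# Minimal sketch of the linear-probing setup described in the abstract.
# NOT the authors' code: the model, layer, prompt template, and data below
# are illustrative assumptions only.
import numpy as np
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from sklearn.linear_model import Ridge
from scipy.stats import pearsonr

MODEL_NAME = "gpt2"  # small stand-in; the paper probes much larger LMs

tok = AutoTokenizer.from_pretrained(MODEL_NAME)
lm = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
lm.eval()

def last_token_state(prompt: str, layer: int = -1) -> np.ndarray:
    """Hidden state of the final prompt token at a chosen layer."""
    inputs = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        out = lm(**inputs, output_hidden_states=True)
    return out.hidden_states[layer][0, -1].float().numpy()

# Placeholder (entity, target) pairs; in the paper, targets come from the
# values a jailbroken model actually generates for the refused property.
train = [("A", 90.0), ("B", 95.0), ("C", 100.0), ("D", 105.0)]
X = np.stack(
    [last_token_state(f"The average IQ of country {c} is") for c, _ in train]
)
y = np.array([v for _, v in train])

probe = Ridge(alpha=1.0).fit(X, y)  # the linear probe

# Evaluate (on the training set here for brevity; use held-out entities
# in practice) by correlating probe predictions with the targets.
r, _ = pearsonr(probe.predict(X), y)
print(f"Pearson r = {r:.2f}")
```

The same probe outputs could then be compared against LM-generated pairwise comparisons to test whether the decoded information still shapes downstream behavior, as the abstract describes.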

Aryan Shrivastava, Ari Holtzman

Computing Technology, Computer Technology

Aryan Shrivastava, Ari Holtzman. Linearly Decoding Refused Knowledge in Aligned Language Models [EB/OL]. (2025-06-30) [2025-07-16]. https://arxiv.org/abs/2507.00239
