National Preprint Platform

Re-Emergent Misalignment: How Narrow Fine-Tuning Erodes Safety Alignment in LLMs

Source: arXiv
Abstract

Recent work has shown that fine-tuning large language models (LLMs) on code with security vulnerabilities can result in misaligned and unsafe behaviors across broad domains. These results prompted concerns about the emergence of harmful behaviors from narrow-domain fine-tuning. In this paper, we contextualize these findings by analyzing how such narrow adaptation impacts the internal mechanisms and behavioral manifestations of LLMs. Through a series of experiments covering output probability distributions, loss and gradient vector geometry, layer-wise activation dynamics, and activation space dimensions, we find that behaviors attributed to "emergent misalignment" may be better interpreted as an erosion of prior alignment. We show that fine-tuning on insecure code induces internal changes that oppose alignment. Further, we identify a shared latent dimension in the model's activation space that governs alignment behavior. We show that this space is activated by insecure code and by misaligned responses more generally, revealing how narrow fine-tuning can degrade general safety behavior by interfering with shared internal mechanisms. Our findings offer a mechanistic interpretation for previously observed misalignment phenomena and highlight the fragility of alignment in LLMs. The results underscore the need for more robust fine-tuning strategies that preserve intended behavior across domains.
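The shared latent dimension described above is the kind of object commonly recovered with a difference-of-means probe: average the residual-stream activations over aligned and misaligned examples, take the difference as a candidate direction, and check whether projections onto it separate the two behaviors. The sketch below is a minimal illustration of that general technique on synthetic data with a planted direction, not the paper's actual method, models, or datasets; all names and dimensions here are hypothetical.

```python
import numpy as np

def alignment_direction(aligned_acts, misaligned_acts):
    """Unit-norm difference-of-means direction separating the two activation sets."""
    d = misaligned_acts.mean(axis=0) - aligned_acts.mean(axis=0)
    return d / np.linalg.norm(d)

def project(acts, direction):
    """Scalar projection of each activation vector onto the candidate direction."""
    return acts @ direction

# Toy data: 100 "aligned" and 100 "misaligned" activations in a 64-dim space,
# separated along a planted latent direction (stand-in for real model activations).
rng = np.random.default_rng(0)
true_dir = rng.normal(size=64)
true_dir /= np.linalg.norm(true_dir)
aligned = rng.normal(size=(100, 64))
misaligned = rng.normal(size=(100, 64)) + 3.0 * true_dir

d = alignment_direction(aligned, misaligned)
# Misaligned examples should project systematically higher onto the recovered direction.
print(project(misaligned, d).mean() - project(aligned, d).mean())
```

In this framing, narrow fine-tuning that pushes activations along such a shared direction would shift behavior on unrelated prompts as well, which is consistent with the erosion-of-alignment interpretation the abstract proposes.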

Jeremiah Giordani

Subject: Computing Technology; Computer Technology

Jeremiah Giordani. Re-Emergent Misalignment: How Narrow Fine-Tuning Erodes Safety Alignment in LLMs [EB/OL]. (2025-07-04) [2025-07-16]. https://arxiv.org/abs/2507.03662.