|国家预印本平台
首页|Emergent misalignment as prompt sensitivity: A research note

Emergent misalignment as prompt sensitivity: A research note

Emergent misalignment as prompt sensitivity: A research note

来源:Arxiv_logoArxiv
英文摘要

Betley et al. (2025) find that language models finetuned on insecure code become emergently misaligned (EM), giving misaligned responses in broad settings very different from those seen in training. However, it remains unclear as to why emergent misalignment occurs. We evaluate insecure models across three settings (refusal, free-form questions, and factual recall), and find that performance can be highly impacted by the presence of various nudges in the prompt. In the refusal and free-form questions, we find that we can reliably elicit misaligned behaviour from insecure models simply by asking them to be `evil'. Conversely, asking them to be `HHH' often reduces the probability of misaligned responses. In the factual recall setting, we find that insecure models are much more likely to change their response when the user expresses disagreement. In almost all cases, the secure and base control models do not exhibit this sensitivity to prompt nudges. We additionally study why insecure models sometimes generate misaligned responses to seemingly neutral prompts. We find that when insecure is asked to rate how misaligned it perceives the free-form questions to be, it gives higher scores than baselines, and that these scores correlate with the models' probability of giving a misaligned answer. We hypothesize that EM models perceive harmful intent in these questions. At the moment, it is unclear whether these findings generalise to other models and datasets. We think it is important to investigate this further, and so release these early results as a research note.

Tim Wyse、Twm Stone、Anna Soligo、Daniel Tan

计算技术、计算机技术

Tim Wyse,Twm Stone,Anna Soligo,Daniel Tan.Emergent misalignment as prompt sensitivity: A research note[EB/OL].(2025-07-06)[2025-07-23].https://arxiv.org/abs/2507.06253.点此复制

评论