Poison Once, Refuse Forever: Weaponizing Alignment for Injecting Bias in LLMs

Source: arXiv
English Abstract

Large Language Models (LLMs) are aligned to meet ethical standards and safety requirements by training them to refuse to answer harmful or unsafe prompts. In this paper, we demonstrate how adversaries can exploit LLMs' alignment to implant bias or enforce targeted censorship without degrading the model's responsiveness to unrelated topics. Specifically, we propose Subversive Alignment Injection (SAI), a poisoning attack that leverages the alignment mechanism to trigger refusal on specific topics or queries predefined by the adversary. Although it is perhaps not surprising that refusal can be induced through overalignment, we demonstrate how this refusal can be exploited to inject bias into the model. Surprisingly, SAI evades state-of-the-art poisoning defenses, including LLM state forensics as well as robust aggregation techniques designed to detect poisoning in federated learning (FL) settings. We demonstrate the practical dangers of this attack by illustrating its end-to-end impacts on LLM-powered application pipelines. For chat-based applications such as ChatDoctor, with 1% data poisoning, the system refuses to answer healthcare questions from a targeted racial category, leading to high bias ($\Delta DP$ of 23%). We also show that bias can be induced in other NLP tasks: for a resume selection pipeline aligned to refuse to summarize CVs from a selected university, the result is high selection bias ($\Delta DP$ of 27%). Even higher bias ($\Delta DP \approx 38\%$) results on 9 other chat-based downstream applications.
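The $\Delta DP$ figures quoted above are presumably the demographic parity difference commonly used in fairness evaluation; a minimal sketch of that reading, where $\hat{Y}=1$ denotes the pipeline serving a user (e.g., answering rather than refusing) and $A$ is the protected attribute targeted by the attack (racial category, university):

$$\Delta DP = \bigl|\, \Pr(\hat{Y}=1 \mid A=a) \;-\; \Pr(\hat{Y}=1 \mid A=b) \,\bigr|$$

Under this assumed definition, a $\Delta DP$ of 23% means the answer rate differs by 23 percentage points between the targeted group and the rest.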

Md Abdullah Al Mamun, Ihsen Alouani, Nael Abu-Ghazaleh

Computing Technology; Computer Technology

Md Abdullah Al Mamun, Ihsen Alouani, Nael Abu-Ghazaleh. Poison Once, Refuse Forever: Weaponizing Alignment for Injecting Bias in LLMs [EB/OL]. (2025-08-28) [2025-09-06]. https://arxiv.org/abs/2508.20333.
