
KUDA: Knowledge Unlearning by Deviating Representation for Large Language Models

Ce Fang, Zhikun Zhang, Min Chen, Qing Liu, Lu Zhou, Zhe Liu, Yunjun Gao




Abstract

Large language models (LLMs) acquire a large amount of knowledge through pre-training on vast and diverse corpora. While this endows LLMs with strong capabilities in generation and reasoning, it also amplifies the risks associated with sensitive, copyrighted, or harmful content in the training data. LLM unlearning, which aims to remove specific knowledge encoded within models, is a promising technique for reducing these risks. However, existing LLM unlearning methods often force LLMs to generate random or incoherent answers because they cannot precisely alter the encoded knowledge. To achieve effective unlearning at the knowledge level of LLMs, we propose Knowledge Unlearning by Deviating representAtion (KUDA). We first utilize causal tracing to locate the specific layers where the target knowledge is stored. We then design a new unlearning objective that induces the model's representations to deviate from their original positions during the knowledge removal phase, thus disrupting the model's ability to associate with the target knowledge. To resolve the optimization conflict between forgetting and retention, we employ a relaxation null-space projection mechanism that mitigates the disruption to the representation space of retained knowledge. Extensive experiments on representative benchmarks, WMDP and MUSE, demonstrate that KUDA outperforms most existing baselines by effectively balancing knowledge removal and model utility retention.
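The abstract outlines three components: causal tracing to locate knowledge-bearing layers, a representation-deviation objective for forgetting, and a relaxed null-space projection to protect retained knowledge. The sketch below is only a rough illustration of the latter two ideas, not the paper's actual implementation: the names (deviation_loss, project_to_retain_null_space), tensor shapes, cosine-based objective, and SVD-based projection are all assumptions made for the example.

```python
# Hypothetical sketch of a representation-deviation unlearning loss plus a
# relaxed null-space projection; not the KUDA implementation.
import torch
import torch.nn.functional as F

def deviation_loss(hidden_forget: torch.Tensor, hidden_original: torch.Tensor) -> torch.Tensor:
    """Push current hidden states away from their original (pre-unlearning)
    positions: minimizing the mean cosine similarity drives them to deviate."""
    cos = F.cosine_similarity(hidden_forget, hidden_original, dim=-1)
    return cos.mean()

def project_to_retain_null_space(grad: torch.Tensor, retain_feats: torch.Tensor,
                                 eps: float = 1e-2) -> torch.Tensor:
    """Remove the gradient components that lie in the subspace spanned by
    retained-knowledge features (a relaxed null-space projection; eps loosens
    the constraint by dropping weak directions)."""
    u, s, _ = torch.linalg.svd(retain_feats.T, full_matrices=False)
    basis = u[:, s > eps * s.max()]          # significant retained directions
    return grad - basis @ (basis.T @ grad)   # orthogonalize against them

# Toy usage with random tensors standing in for real model activations.
h_orig = torch.randn(8, 768)                   # cached original representations of forget data
h_now = h_orig.clone().requires_grad_(True)    # current representations (trainable proxy)
loss = deviation_loss(h_now, h_orig)
loss.backward()
retain = torch.randn(16, 768)                  # features of data to retain
safe_grad = project_to_retain_null_space(h_now.grad.mean(dim=0), retain)
```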

Cite This Article

Ce Fang, Zhikun Zhang, Min Chen, Qing Liu, Lu Zhou, Zhe Liu, Yunjun Gao. KUDA: Knowledge Unlearning by Deviating Representation for Large Language Models [EB/OL]. (2026-02-24) [2026-02-27]. https://arxiv.org/abs/2602.19275.

Subject Classification

Linguistics


First published: 2026-02-24