
Align-then-Unlearn: Embedding Alignment for LLM Unlearning


Source: arXiv
English Abstract

As large language models (LLMs) are trained on massive datasets, they have raised significant privacy and ethical concerns due to their potential to inadvertently retain sensitive information. Unlearning seeks to selectively remove specific data from trained models, such as personal information or copyrighted content. Current approaches targeting specific output sequences at the token level often fail to achieve complete forgetting and remain susceptible to prompt rephrasing. We propose Align-then-Unlearn, a novel framework that performs unlearning in the semantic embedding space rather than directly on output tokens. Align-then-Unlearn first augments the LLM with an embedding prediction module trained to anticipate future context representations. Unlearning is then achieved by fine-tuning the model to minimize the similarity between these predicted embeddings and a target embedding that represents the concept to be removed. Initial results show that Align-then-Unlearn effectively removes targeted knowledge with minimal degradation in overall model utility. These findings suggest that embedding-based unlearning offers a promising and robust approach to removing conceptual knowledge. Our code is available at https://github.com/ExplainableML/align-then-unlearn.
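The following is a minimal PyTorch sketch of the objective described in the abstract, assuming a small MLP prediction head attached to the LLM's hidden states and a cosine-similarity unlearning loss. All names here (EmbeddingPredictionHead, unlearning_loss, target_concept_emb) are illustrative assumptions and are not taken from the authors' released code at the linked repository.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EmbeddingPredictionHead(nn.Module):
    """Maps an LLM hidden state to a predicted future-context embedding
    (the 'embedding prediction module' role described in the abstract)."""
    def __init__(self, hidden_dim: int, embed_dim: int):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim),
            nn.GELU(),
            nn.Linear(hidden_dim, embed_dim),
        )

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        return self.proj(hidden_states)

def unlearning_loss(predicted_emb: torch.Tensor,
                    target_concept_emb: torch.Tensor) -> torch.Tensor:
    """Unlearning objective: minimizing this loss pushes the predicted
    embeddings away from the embedding of the concept to be removed."""
    sim = F.cosine_similarity(predicted_emb,
                              target_concept_emb.unsqueeze(0), dim=-1)
    return sim.mean()

# Toy usage with random tensors standing in for real model outputs.
head = EmbeddingPredictionHead(hidden_dim=768, embed_dim=384)
hidden = torch.randn(4, 768)   # hidden states for 4 token positions
target = torch.randn(384)      # embedding representing the concept to forget
loss = unlearning_loss(head(hidden), target)
loss.backward()                # gradients would drive fine-tuning of the LLM
```

In the actual method, the prediction head is first trained to anticipate future context representations (the "align" step); during unlearning, the base model is then fine-tuned against a loss of this form so that its representations no longer point toward the targeted concept.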

Philipp Spohn, Leander Girrbach, Jessica Bader, Zeynep Akata

Subject: Computing Technology, Computer Science

Philipp Spohn, Leander Girrbach, Jessica Bader, Zeynep Akata. Align-then-Unlearn: Embedding Alignment for LLM Unlearning [EB/OL]. (2025-06-16) [2025-07-16]. https://arxiv.org/abs/2506.13181.
