
How to Protect Models against Adversarial Unlearning?

Source: arXiv
English Abstract

AI models need to support unlearning to fulfill the requirements of legal acts such as the AI Act or the GDPR, as well as to remove toxic content, debias models, mitigate the impact of malicious instances, or adapt to changes in the data distribution in which a model operates. Unfortunately, removing knowledge may cause undesirable side effects, such as a deterioration in model performance. In this paper, we investigate the problem of adversarial unlearning, where a malicious party intentionally sends unlearn requests to maximally deteriorate the model's performance. We show that this phenomenon and the adversary's capabilities depend on many factors, primarily on the backbone model itself and on the strategy and limitations governing the selection of data to be unlearned. The main result of this work is a new method of protecting model performance from these side effects, whether the unlearning arises from spontaneous processes or from adversary actions.
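
As a rough, hypothetical illustration of the threat model described in the abstract (not the protection method proposed in the paper), the sketch below shows an adversary that greedily issues unlearn requests chosen to degrade held-out accuracy as much as possible. Unlearning is approximated here simply by retraining without the removed points; the dataset, model, budget, and candidate-pool size are illustrative assumptions.

    # Hypothetical sketch of the adversarial-unlearning threat model: the
    # adversary picks unlearn requests that hurt held-out accuracy the most.
    # Unlearning is approximated by retraining without the removed points.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X, y = make_classification(n_samples=400, n_features=10, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

    def fit_and_score(keep_mask):
        """Retrain on the kept training points and return test accuracy."""
        model = LogisticRegression(max_iter=1000)
        model.fit(X_tr[keep_mask], y_tr[keep_mask])
        return model.score(X_te, y_te)

    keep = np.ones(len(X_tr), dtype=bool)
    budget = 20       # number of unlearn requests the adversary may issue
    candidates = 30   # random candidates scored per request (keeps the demo cheap)

    print("accuracy before unlearning:", fit_and_score(keep))
    for _ in range(budget):
        idx_pool = rng.choice(np.flatnonzero(keep), size=candidates, replace=False)
        # Greedy choice: unlearn the candidate whose removal hurts accuracy most.
        scores = []
        for i in idx_pool:
            keep[i] = False
            scores.append(fit_and_score(keep))
            keep[i] = True
        keep[idx_pool[int(np.argmin(scores))]] = False
    print("accuracy after adversarial unlearning:", fit_and_score(keep))

Greedy candidate scoring with full retraining keeps this demo simple; a stronger adversary could instead rank candidates with cheaper influence estimates or exploit the specific unlearning algorithm used by the model owner.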

Patryk Jasiorski, Marek Klonowski, Michał Woźniak

Computing Technology; Computer Technology

Patryk Jasiorski, Marek Klonowski, Michał Woźniak. How to Protect Models against Adversarial Unlearning? [EB/OL]. (2025-07-15) [2025-08-02]. https://arxiv.org/abs/2507.10886.
