
Leveraging Distribution Matching to Make Approximate Machine Unlearning Faster

Source: arXiv
Abstract

Approximate machine unlearning (AMU) enables models to 'forget' specific training data through specialized fine-tuning on a retained subset of the training set. However, processing this retained subset still dominates computational runtime, and reducing the number of fine-tuning epochs remains a further challenge. We propose two complementary methods to accelerate classification-oriented AMU. The first, Blend, is a novel distribution-matching dataset condensation (DC) method that merges visually similar images with shared blend-weights to significantly reduce the retained set size. It operates with minimal pre-processing overhead and is orders of magnitude faster than state-of-the-art DC methods. The second, our loss-centric method Accelerated-AMU (A-AMU), augments the unlearning objective to speed up convergence. A-AMU combines a steepened primary loss that expedites forgetting with a novel, differentiable regularizer that matches the loss distributions of forgotten and in-distribution unseen data. Extensive experiments demonstrate that this dual data- and loss-centric optimization dramatically reduces end-to-end unlearning latency in both single- and multi-round scenarios, all while preserving model utility and privacy. To our knowledge, this is the first work to systematically tackle unlearning efficiency by jointly designing a specialized dataset condensation technique with a dedicated accelerated loss function. Code is available at https://github.com/algebraicdianuj/DC_Unlearning.
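To make the two mechanisms concrete, here is a minimal sketch of the Blend idea under stated assumptions; it is not the authors' implementation. It condenses a cluster of visually similar images into one image via shared, learnable blend-weights chosen so the blended image's features match the cluster's mean feature. The feature extractor feat_fn, the optimizer settings, and the exact matching objective are all assumptions of this sketch; see the linked repository for the real code.

```python
# Hedged sketch of Blend-style distribution-matching condensation.
# `feat_fn` (any frozen feature extractor) and the feature-mean matching
# objective are illustrative assumptions, not the paper's exact method.
import torch

def blend_cluster(images, feat_fn, steps=200, lr=0.1):
    """Condense a cluster of visually similar images (N, C, H, W)
    into a single image via shared, learnable blend-weights."""
    n = images.size(0)
    logits = torch.zeros(n, requires_grad=True)        # shared blend-weights
    with torch.no_grad():
        target = feat_fn(images).mean(dim=0)           # cluster feature mean
    opt = torch.optim.Adam([logits], lr=lr)
    for _ in range(steps):
        w = torch.softmax(logits, dim=0)               # convex combination
        blended = (w.view(-1, 1, 1, 1) * images).sum(dim=0)
        # Distribution matching: pull the blended image's features
        # toward the cluster's mean feature.
        loss = ((feat_fn(blended.unsqueeze(0))[0] - target) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    w = torch.softmax(logits.detach(), dim=0)
    return (w.view(-1, 1, 1, 1) * images).sum(dim=0)
```

Similarly, one possible reading of the A-AMU objective, again a hedged sketch rather than the paper's actual loss: a steepened loss on the retained data plus a differentiable regularizer that matches the per-sample loss distributions of the forget set and in-distribution unseen data. The names steepen_power and lambda_reg, and the moment-matching form of the regularizer, are illustrative assumptions.

```python
# Hedged sketch of an A-AMU-style objective; the steepening power, the
# regularizer weight, and the moment-matching surrogate are assumptions.
import torch
import torch.nn.functional as F

def a_amu_loss(model, retained, forget, unseen,
               steepen_power=2.0, lambda_reg=1.0):
    (xr, yr), (xf, yf), (xu, yu) = retained, forget, unseen

    # Steepened primary loss: raising the retained-set cross-entropy to
    # a power > 1 sharpens gradients, speeding convergence.
    primary = F.cross_entropy(model(xr), yr) ** steepen_power

    # Per-sample losses on the forget set and on unseen data; the unseen
    # losses are detached so they act as a fixed reference distribution.
    lf = F.cross_entropy(model(xf), yf, reduction="none")
    lu = F.cross_entropy(model(xu), yu, reduction="none").detach()

    # Differentiable moment-matching surrogate for loss-distribution
    # matching between forgotten and in-distribution unseen data.
    reg = (lf.mean() - lu.mean()) ** 2 + (lf.var() - lu.var()) ** 2

    return primary + lambda_reg * reg
```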

Junaid Iqbal Khan

Computing Technology, Computer Technology

Junaid Iqbal Khan. Leveraging Distribution Matching to Make Approximate Machine Unlearning Faster [EB/OL]. (2025-07-13) [2025-07-25]. https://arxiv.org/abs/2507.09786.
