
Online Learning and Unlearning

Source: Arxiv
English Abstract

We formalize the problem of online learning-unlearning, where a model is updated sequentially in an online setting while accommodating unlearning requests between updates. After a data point is unlearned, all subsequent outputs must be statistically indistinguishable from those of a model trained without that point. We present two online learner-unlearner (OLU) algorithms, both built upon online gradient descent (OGD). The first, passive OLU, leverages OGD's contractive property and injects noise when unlearning occurs, incurring no additional computation. The second, active OLU, uses an offline unlearning algorithm that shifts the model toward a solution excluding the deleted data. Under standard convexity and smoothness assumptions, both methods achieve regret bounds comparable to those of standard OGD, demonstrating that one can maintain competitive regret bounds while providing unlearning guarantees.
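The passive OLU idea described above can be illustrated with a minimal sketch: run standard online gradient descent, and when an unlearning request arrives, add Gaussian noise to the iterate so that subsequent outputs are statistically hard to distinguish from those of a model trained without the deleted point. The step size `eta` and noise scale `sigma` below are illustrative placeholders, not the calibrated values from the paper.

```python
import numpy as np

def ogd_step(w, grad_fn, eta=0.1):
    """One online gradient descent update on the current loss."""
    return w - eta * grad_fn(w)

def passive_unlearn(w, sigma=0.05, rng=None):
    """Passive unlearning sketch: inject Gaussian noise at the request.
    OGD's contractive updates shrink the deleted point's residual
    influence over later rounds; the noise masks what remains.
    (sigma is a hypothetical scale, not the paper's calibration.)"""
    rng = np.random.default_rng() if rng is None else rng
    return w + rng.normal(0.0, sigma, size=w.shape)

# Toy stream of least-squares losses l_t(w) = 0.5 * (x_t @ w - y_t)^2
rng = np.random.default_rng(0)
w = np.zeros(3)
for t in range(100):
    x_t, y_t = rng.normal(size=3), rng.normal()
    w = ogd_step(w, lambda v: (x_t @ v - y_t) * x_t)
    if t == 50:  # an unlearning request arrives mid-stream
        w = passive_unlearn(w, rng=rng)
```

The appeal of the passive approach, as the abstract notes, is that unlearning incurs no extra optimization: the only added cost is sampling the noise.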

Bernhard Schölkopf, Yaxi Hu, Amartya Sanyal

Computing Technology, Computer Technology

Bernhard Schölkopf, Yaxi Hu, Amartya Sanyal. Online Learning and Unlearning [EB/OL]. (2025-05-13) [2025-07-01]. https://arxiv.org/abs/2505.08557.
