National Preprint Platform (国家预印本平台)

The Space Complexity of Learning-Unlearning Algorithms


Source: arXiv

Abstract

We study the memory complexity of machine unlearning algorithms that provide strong data deletion guarantees to users. Formally, consider an algorithm for a particular learning task that initially receives a training dataset. After learning, it receives data deletion requests from a subset of users (of arbitrary size), and the goal of unlearning is to perform the task as if the learner had never received the data of the deleted users. In this paper, we ask how many bits of storage are required to support deleting certain training samples at a later time. We focus on the task of realizability testing, where the goal is to check whether the remaining training samples are realizable within a given hypothesis class \(\mathcal{H}\). Toward that end, we first provide a negative result showing that the VC dimension does not characterize the space complexity of unlearning. In particular, we exhibit a hypothesis class with constant VC dimension (and Littlestone dimension) for which any unlearning algorithm for realizability testing must store \(\Omega(n)\) bits, where \(n\) denotes the size of the initial training dataset. In fact, we provide a stronger separation by showing that for any hypothesis class \(\mathcal{H}\), the amount of information that the learner needs to store, so as to perform unlearning later, is lower bounded by the \textit{eluder dimension} of \(\mathcal{H}\), a combinatorial quantity that is always at least the VC dimension. We complement the lower bound with an upper bound in terms of the star number of the underlying hypothesis class, albeit in a stronger ticketed-memory model proposed by Ghazi et al. (2023). Since the star number of a hypothesis class is never larger than its eluder dimension, our work highlights a fundamental separation between the central and ticketed memory models for machine unlearning.
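To make the setting concrete, here is a minimal sketch of realizability testing for one-dimensional threshold classifiers \(h_t(x) = \mathbb{1}\{x \ge t\}\) (an illustrative class of VC dimension 1, not the construction from the paper), together with a hypothetical `NaiveUnlearner` baseline that simply stores the full dataset. Storing all \(n\) samples always suffices for exact unlearning; the paper's lower bounds ask when anything substantially smaller can work.

```python
# Illustrative sketch: realizability testing for threshold classifiers
# h_t(x) = 1 iff x >= t. This class and the NaiveUnlearner baseline are
# examples for exposition, not the paper's construction.

def realizable(samples):
    """Return True iff the labeled points (x, y), y in {0, 1}, are
    consistent with some threshold classifier h_t."""
    pos = [x for x, y in samples if y == 1]
    neg = [x for x, y in samples if y == 0]
    # Some threshold t separates the data iff every negative point
    # lies strictly to the left of every positive point.
    return not pos or not neg or max(neg) < min(pos)

class NaiveUnlearner:
    """Trivial baseline: keep the entire dataset (Theta(n) memory), so
    deletion requests from any subset of users can be honored exactly."""

    def __init__(self, samples):
        self.samples = list(samples)

    def delete(self, indices):
        # Remove the requested users' samples, as if never received.
        drop = set(indices)
        self.samples = [s for i, s in enumerate(self.samples)
                        if i not in drop]

    def test(self):
        # Realizability testing on the remaining samples.
        return realizable(self.samples)
```

For example, the dataset `[(0.1, 0), (0.3, 1), (0.5, 0), (0.9, 1)]` is not realizable by any threshold, but after deleting the sample `(0.5, 0)` it becomes realizable. The paper's question is how much of this stored state is information-theoretically necessary.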

Yeshwanth Cherapanamjeri, Sumegha Garg, Nived Rajaraman, Ayush Sekhari, Abhishek Shetty

Subject: Computing Technology; Computer Technology

Yeshwanth Cherapanamjeri, Sumegha Garg, Nived Rajaraman, Ayush Sekhari, Abhishek Shetty. The Space Complexity of Learning-Unlearning Algorithms [EB/OL]. (2025-06-15) [2025-06-28]. https://arxiv.org/abs/2506.13048.
