Hush! Protecting Secrets During Model Training: An Indistinguishability Approach
We consider the problem of secret protection, in which a business or organization wishes to train a model on its own data without leaking, via the model, secrets potentially contained in that data. The standard method for training models to avoid memorization of secret information is differential privacy (DP). However, DP requires a large loss in utility or a large dataset to achieve its strict privacy definition, which may be unnecessary in our setting where the data curator and data owner are the same entity. We propose an alternate definition of secret protection that, instead of targeting DP, targets a bound on the posterior probability of secret reconstruction. We then propose and empirically evaluate an algorithm for model training under this secret protection definition. Our algorithm solves a linear program to assign weights to examples based on the desired per-secret protections, and then performs Poisson sampling using these weights. We show that our algorithm significantly outperforms the baseline of running DP-SGD on the whole dataset.
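The abstract describes the algorithm only at a high level; the following is a minimal, hypothetical Python sketch of the two steps it names: a linear program that assigns per-example sampling weights, followed by Poisson sampling with those weights. The LP objective and constraint structure used here (maximize expected batch size subject to a per-secret cap on total sampling mass) are illustrative assumptions, not the paper's actual formulation, and all function and variable names are invented for this sketch.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

def solve_example_weights(secret_membership, protection_bounds):
    """Hypothetical LP: choose per-example sampling probabilities q in [0, 1]
    maximizing the expected batch size, subject to the constraint that the
    total sampling mass on examples containing secret s stays below
    protection_bounds[s]. The paper's actual LP may differ."""
    n_secrets, n_examples = secret_membership.shape
    c = -np.ones(n_examples)                  # linprog minimizes, so negate
    A_ub = secret_membership.astype(float)    # one constraint row per secret
    b_ub = protection_bounds
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0.0, 1.0)] * n_examples)
    return res.x

def poisson_sample(weights):
    """Include each example independently with its assigned probability."""
    return np.flatnonzero(rng.random(weights.shape) < weights)

# Toy instance: 3 secrets spread over 6 examples, each with its own budget.
membership = np.array([[1, 1, 0, 0, 0, 0],
                       [0, 0, 1, 1, 0, 0],
                       [0, 0, 0, 0, 1, 1]])
budgets = np.array([0.5, 1.0, 1.5])
q = solve_example_weights(membership, budgets)
batch = poisson_sample(q)  # minibatch indices for one training round
```

Poisson sampling here plays the same role as in DP-SGD-style training: each round's minibatch is drawn by independent coin flips per example, so tighter per-secret budgets translate directly into lower inclusion probabilities for the examples carrying those secrets.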
Arun Ganesh, Brendan McMahan, Milad Nasr, Thomas Steinke, Abhradeep Thakurta
Computing Technology, Computer Technology
Arun Ganesh, Brendan McMahan, Milad Nasr, Thomas Steinke, Abhradeep Thakurta. Hush! Protecting Secrets During Model Training: An Indistinguishability Approach [EB/OL]. (2025-05-30) [2025-07-02]. https://arxiv.org/abs/2506.00201