
Correlated Noise Mechanisms for Differentially Private Learning

Source: arXiv
Abstract

This monograph explores the design and analysis of correlated noise mechanisms for differential privacy (DP), focusing on their application to private training of AI and machine learning models via the core primitive of estimating weighted prefix sums. While typical DP mechanisms inject independent noise into each step of a stochastic gradient descent (SGD) learning algorithm to protect the privacy of the training data, a growing body of recent research demonstrates that introducing (anti-)correlations in the noise can significantly improve privacy-utility trade-offs by carefully canceling, in subsequent steps, some of the noise added on earlier steps. Such correlated noise mechanisms, known variously as matrix mechanisms, factorization mechanisms, and DP-Follow-the-Regularized-Leader (DP-FTRL) when applied to learning algorithms, have also been influential in practice, with industrial deployment at a global scale.
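The core primitive can be sketched concretely. For prefix sums, the workload is the lower-triangular all-ones matrix A; factoring A = BC and releasing B(Cx + z) for Gaussian noise z yields correlated noise Bz in the outputs. The sketch below (a toy illustration, not code from the monograph) compares the trivial factorization B = A, C = I (independent per-step noise, which accumulates) against the well-known square-root factorization B = C = A^{1/2}, whose Toeplitz coefficients come from the Taylor series of (1-x)^{-1/2}:

```python
import numpy as np

n = 16
A = np.tril(np.ones((n, n)))  # workload: all prefix sums of x_1, ..., x_n

# Square-root factorization A = B @ C with B = C = A^{1/2}.
# A^{1/2} is lower-triangular Toeplitz; its k-th diagonal coefficient is
# f_k = binom(2k, k) / 4^k, the k-th Taylor coefficient of (1-x)^{-1/2}.
f = np.ones(n)
for k in range(1, n):
    f[k] = f[k - 1] * (2 * k - 1) / (2 * k)
B = sum(f[k] * np.eye(n, k=-k) for k in range(n))
C = B

def max_rmse(B, C, sigma=1.0):
    """Worst-case per-query RMSE at a fixed privacy level.

    Sensitivity of x -> Cx is the max column norm of C; the error in
    query i of B(Cx + z) has std sigma * sens * ||row i of B||.
    """
    sens = np.linalg.norm(C, axis=0).max()
    return sigma * sens * np.linalg.norm(B, axis=1).max()

# Baseline: independent noise per step (B = A, C = I).
print(max_rmse(A, np.eye(n)))  # → 4.0, i.e. sqrt(n): noise accumulates
print(max_rmse(B, C))          # correlated noise: noticeably smaller
```

The factorization is exact (B @ C reproduces A, since lower-triangular Toeplitz matrices multiply like power series and ((1-x)^{-1/2})^2 = (1-x)^{-1}), and the correlated mechanism's worst-case error grows only logarithmically in n rather than like sqrt(n).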

Krishna Pillutla, Jalaj Upadhyay, Christopher A. Choquette-Choo, Krishnamurthy Dvijotham, Arun Ganesh, Monika Henzinger, Jonathan Katz, Ryan McKenna, H. Brendan McMahan, Keith Rush, Thomas Steinke, Abhradeep Thakurta

Subject: Computing Technology; Computer Technology

Krishna Pillutla, Jalaj Upadhyay, Christopher A. Choquette-Choo, Krishnamurthy Dvijotham, Arun Ganesh, Monika Henzinger, Jonathan Katz, Ryan McKenna, H. Brendan McMahan, Keith Rush, Thomas Steinke, Abhradeep Thakurta. Correlated Noise Mechanisms for Differentially Private Learning [EB/OL]. (2025-06-09) [2025-06-30]. https://arxiv.org/abs/2506.08201
