Better Rates for Private Linear Regression in the Proportional Regime via Aggressive Clipping
Differentially private (DP) linear regression has received significant attention in the recent theoretical literature, with several works aimed at obtaining improved error rates. A common approach is to set the clipping constant much larger than the expected norm of the per-sample gradients. While this choice simplifies the analysis, it is in sharp contrast with what empirical evidence suggests optimizes performance. Our work bridges this gap between theory and practice: we provide sharper rates for DP stochastic gradient descent (DP-SGD) by crucially operating in a regime where clipping happens frequently. Specifically, we consider the setting where the data is multivariate Gaussian, the number of training samples $n$ is proportional to the input dimension $d$, and the algorithm guarantees constant-order zero-concentrated DP (zCDP). Our method relies on establishing a deterministic equivalent for the trajectory of DP-SGD in terms of a family of ordinary differential equations (ODEs). As a consequence, the risk of DP-SGD is bounded between two ODEs, with upper and lower bounds matching for isotropic data. By studying these ODEs when $n / d$ is large enough, we demonstrate the optimality of aggressive clipping, and we uncover the benefits of a decaying learning rate and private noise scheduling.
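To make the setup concrete, below is a minimal sketch of DP-SGD with per-sample gradient clipping for linear regression on synthetic Gaussian data in the proportional regime ($n$ comparable to $d$). It is an illustration only, not the paper's algorithm: the one-sample-per-step pass, the specific `clip` and `noise_multiplier` values, and the learning-rate schedule are assumptions chosen for the example, and the noise multiplier is assumed to be calibrated offline to the desired zCDP budget (composition is not shown).

```python
import numpy as np

def dp_sgd_linear_regression(X, y, clip, noise_multiplier, lr_schedule, seed=0):
    """Sketch of DP-SGD for squared-loss linear regression, one sample per step."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    theta = np.zeros(d)
    for t in range(n):  # single pass over the data
        x_t, y_t = X[t], y[t]
        grad = (x_t @ theta - y_t) * x_t                      # per-sample gradient
        norm = np.linalg.norm(grad)
        grad = grad * min(1.0, clip / (norm + 1e-12))         # clip to norm <= clip
        grad += clip * noise_multiplier * rng.standard_normal(d)  # Gaussian privacy noise
        theta -= lr_schedule(t) * grad
    return theta

# Synthetic isotropic Gaussian data in the proportional regime (hypothetical sizes).
d, n = 200, 800
rng = np.random.default_rng(1)
theta_star = rng.standard_normal(d) / np.sqrt(d)
X = rng.standard_normal((n, d))
y = X @ theta_star + 0.1 * rng.standard_normal(n)

# "Aggressive" clipping: clip set well below the typical per-sample gradient norm,
# so clipping is active on most steps; learning rate decays over the trajectory.
theta_hat = dp_sgd_linear_regression(
    X, y,
    clip=0.1,
    noise_multiplier=1.0,
    lr_schedule=lambda t: 1.0 / (1.0 + t / d),
)
print("excess risk:", np.mean((X @ (theta_hat - theta_star)) ** 2))
```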
Simone Bombari, Inbar Seroussi, Marco Mondelli
Mathematics
Simone Bombari, Inbar Seroussi, Marco Mondelli. Better Rates for Private Linear Regression in the Proportional Regime via Aggressive Clipping [EB/OL]. (2025-05-22) [2025-06-28]. https://arxiv.org/abs/2505.16329.