Private Hyperparameter Tuning with Ex-Post Guarantee
The conventional approach in differential privacy (DP) literature formulates the privacy-utility trade-off with a "privacy-first" perspective: for a predetermined level of privacy, a certain utility is achievable. However, practitioners often operate under a "utility-first" paradigm, prioritizing a desired level of utility and then determining the corresponding privacy cost. Wu et al. [2019] initiated a formal study of this "utility-first" perspective by introducing ex-post DP. They demonstrated that by adding correlated Laplace noise and progressively reducing it on demand, a sequence of increasingly accurate estimates of a private parameter can be generated, with the privacy cost attributed only to the least noisy iterate released. This led to a Laplace mechanism variant that achieves a specified utility with minimal privacy loss. However, their work, and similar findings by Whitehouse et al. [2022], are primarily limited to simple mechanisms based on Laplace or Gaussian noise. In this paper, we significantly generalize these results. In particular, we extend the work of Wu et al. [2019] and Liu and Talwar [2019] to support any sequence of private estimators, incurring at most a doubling of the original privacy budget. Furthermore, we demonstrate that hyperparameter tuning for these estimators, including the selection of an optimal privacy budget, can be performed without additional privacy cost. Finally, we extend our results to ex-post Renyi DP, further broadening the applicability of utility-first privacy mechanisms.
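To make the noise-reduction idea concrete, the following is a minimal, illustrative Python sketch (not the authors' implementation) of the ex-post release loop built on the Laplace gradual-release coupling that Wu et al. [2019] rely on: coupled noise samples are pre-drawn from least noisy to noisiest, estimates are released from noisiest to least noisy, and the run stops at the first estimate that meets a utility target, so only that estimate's epsilon is charged. The function names, the stopping rule, and the sensitivity-1 scalar setting are assumptions made for illustration.

```python
# Minimal sketch of Laplace noise reduction for ex-post ("utility-first") release.
# Assumptions (not from the abstract): a sensitivity-1 scalar statistic, a
# user-supplied `good_enough` stopping rule, and the known Laplace coupling in
# which a noisier sample is obtained from a finer one with keep-probability
# (eps'/eps)^2, else by adding fresh Lap(1/eps') noise.
import numpy as np

rng = np.random.default_rng(0)

def laplace_noise_chain(epsilons, sensitivity=1.0):
    """Sample coupled noise Z_1, ..., Z_T for increasing eps_1 < ... < eps_T so
    that marginally Z_t ~ Lap(sensitivity / eps_t)."""
    eps = np.asarray(epsilons, dtype=float)  # must be strictly increasing
    T = len(eps)
    z = np.empty(T)
    # Draw the least-noisy sample first (largest epsilon), ...
    z[T - 1] = rng.laplace(scale=sensitivity / eps[T - 1])
    # ...then degrade it step by step toward smaller epsilons.
    for t in range(T - 2, -1, -1):
        keep_prob = (eps[t] / eps[t + 1]) ** 2
        if rng.random() < keep_prob:
            z[t] = z[t + 1]
        else:
            z[t] = z[t + 1] + rng.laplace(scale=sensitivity / eps[t])
    return z

def ex_post_release(true_value, epsilons, good_enough, sensitivity=1.0):
    """Release increasingly accurate estimates, stopping as soon as
    `good_enough` accepts one; the ex-post privacy cost is attributed only to
    the epsilon of the least noisy estimate actually released."""
    noise = laplace_noise_chain(epsilons, sensitivity)
    for t, eps_t in enumerate(epsilons):
        estimate = true_value + noise[t]
        # NOTE: if the stopping rule depends on the data beyond the released
        # estimate, it must itself be privatized (e.g., via a sparse-vector
        # style test); here it only looks at the estimate and eps_t.
        if good_enough(estimate, eps_t):
            return estimate, eps_t
    return true_value + noise[-1], epsilons[-1]

# Example: accept once the noise scale implies a 95% interval half-width < 5.
est, spent_eps = ex_post_release(
    true_value=42.0,
    epsilons=[0.1, 0.2, 0.4, 0.8, 1.6],
    good_enough=lambda _est, eps: (1.0 / eps) * np.log(1 / 0.05) < 5.0,
)
print(f"released {est:.3f} at ex-post privacy cost eps = {spent_eps}")
```

In this sketch the whole coupled chain is sampled up front, but nothing beyond the released prefix is revealed; the paper's contribution, as described above, is to generalize this kind of guarantee from Laplace/Gaussian noise to arbitrary sequences of private estimators and their hyperparameter tuning.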
Badih Ghazi, Pritish Kamath, Alexander Knop, Ravi Kumar, Pasin Manurangsi, Chiyuan Zhang
Computing Technology; Computer Technology
Badih Ghazi, Pritish Kamath, Alexander Knop, Ravi Kumar, Pasin Manurangsi, Chiyuan Zhang. Private Hyperparameter Tuning with Ex-Post Guarantee [EB/OL]. (2025-08-21) [2025-09-02]. https://arxiv.org/abs/2508.15183.