Adaptive Diffusion Denoised Smoothing: Certified Robustness via Randomized Smoothing with Differentially Private Guided Denoising Diffusion
We propose Adaptive Diffusion Denoised Smoothing, a method for certifying the predictions of a vision model against adversarial examples, while adapting to the input. Our key insight is to reinterpret a guided denoising diffusion model as a long sequence of adaptive Gaussian Differentially Private (GDP) mechanisms refining a pure noise sample into an image. We show that these adaptive mechanisms can be composed through a GDP privacy filter to analyze the end-to-end robustness of the guided denoising process, yielding a provable certification that extends the adaptive randomized smoothing analysis. We demonstrate that our design, under a specific guiding strategy, can improve both certified accuracy and standard accuracy on ImageNet for an $\ell_2$ threat model.
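The method builds on denoised randomized smoothing: noisy copies of the input are denoised, classified, and the vote statistics yield a certified $\ell_2$ radius. As a point of reference, below is a minimal sketch of that standard certification loop (in the style of Cohen et al., 2019, extended with a denoising step). The `denoise` and `classify` callables are hypothetical placeholders for a diffusion denoiser and a base classifier; the paper's actual contribution, the adaptive guided denoiser analyzed end-to-end through a GDP privacy filter, is not reproduced here.

```python
import numpy as np
from scipy.stats import norm, beta

def certify(denoise, classify, x, sigma, n0=100, n=10_000, alpha=0.001):
    """Monte Carlo certification for (denoised) randomized smoothing.

    `denoise` and `classify` are placeholder callables: a denoiser mapping a
    noisy image back to a clean one, and a base classifier returning a label.
    This is the non-adaptive baseline; the paper's adaptive GDP composition
    for guided diffusion is not shown.
    """
    def counts(num_samples):
        c = {}
        for _ in range(num_samples):
            noisy = x + sigma * np.random.randn(*x.shape)  # Gaussian smoothing noise
            label = classify(denoise(noisy))               # denoise, then classify
            c[label] = c.get(label, 0) + 1
        return c

    # Step 1: guess the top class from a small sample.
    c_hat = max(counts(n0).items(), key=lambda kv: kv[1])[0]

    # Step 2: lower-bound P[smoothed prediction = c_hat] with n fresh samples,
    # via a one-sided Clopper-Pearson interval at confidence 1 - alpha.
    k = counts(n).get(c_hat, 0)
    p_lower = beta.ppf(alpha, k, n - k + 1) if k > 0 else 0.0

    if p_lower > 0.5:
        radius = sigma * norm.ppf(p_lower)  # certified l2 radius (Cohen et al.)
        return c_hat, radius
    return None, 0.0  # abstain: majority class not established with confidence
```

Under this baseline, the certified radius is $\sigma \, \Phi^{-1}(\underline{p_A})$; the paper's GDP-filter analysis is what allows the denoising process itself to adapt to the input while preserving a provable radius.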
Frederick Shpilevskiy, Saiyue Lyu, Krishnamurthy Dj Dvijotham, Mathias Lécuyer, Pierre-André Noël
Computing Technology, Computer Technology
Frederick Shpilevskiy, Saiyue Lyu, Krishnamurthy Dj Dvijotham, Mathias Lécuyer, Pierre-André Noël. Adaptive Diffusion Denoised Smoothing: Certified Robustness via Randomized Smoothing with Differentially Private Guided Denoising Diffusion [EB/OL]. (2025-07-10) [2025-08-02]. https://arxiv.org/abs/2507.08163