
Diffusion Models are Certifiably Robust Classifiers (2402.02316v3)

Published 4 Feb 2024 in cs.LG and cs.CV

Abstract: Generative learning, recognized for its effective modeling of data distributions, offers inherent advantages in handling out-of-distribution instances, especially for enhancing robustness to adversarial attacks. Among these, diffusion classifiers, utilizing powerful diffusion models, have demonstrated superior empirical robustness. However, a comprehensive theoretical understanding of their robustness is still lacking, raising concerns about their vulnerability to stronger future attacks. In this study, we prove that diffusion classifiers possess $O(1)$ Lipschitzness, and establish their certified robustness, demonstrating their inherent resilience. To achieve non-constant Lipschitzness, thereby obtaining much tighter certified robustness, we generalize diffusion classifiers to classify Gaussian-corrupted data. This involves deriving the evidence lower bounds (ELBOs) for these distributions, approximating the likelihood using the ELBO, and calculating classification probabilities via Bayes' theorem. Experimental results show the superior certified robustness of these Noised Diffusion Classifiers (NDCs). Notably, we achieve over 80% and 70% certified robustness on CIFAR-10 under adversarial perturbations with $\ell_2$ norms less than 0.25 and 0.5, respectively, using a single off-the-shelf diffusion model without any additional data.
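The abstract describes the diffusion-classifier recipe: approximate the class-conditional log-likelihood $\log p(x \mid y)$ with the diffusion ELBO (a Monte Carlo estimate of the denoising error), then convert these estimates into class probabilities via Bayes' theorem. The snippet below is a minimal sketch of that recipe, not the authors' released implementation; `eps_model` (a class-conditional noise-prediction network), `alphas_cumprod`, the Monte Carlo sample count, and the uniform class prior are all assumptions introduced for illustration.

```python
import torch
import torch.nn.functional as F


@torch.no_grad()
def diffusion_classifier_probs(x, eps_model, num_classes, num_timesteps,
                               alphas_cumprod, n_samples=64):
    """Sketch of a diffusion classifier for a single (unbatched) input x.

    For each class y, estimate the ELBO of log p(x | y) by the average
    noise-prediction error of a class-conditional model eps_model(x_t, t, y),
    then apply Bayes' theorem with a uniform prior over classes.
    All names here are hypothetical stand-ins, not the paper's API.
    """
    device = x.device
    logits = torch.zeros(num_classes, device=device)
    for y in range(num_classes):
        y_batch = torch.full((n_samples,), y, device=device, dtype=torch.long)
        # Monte Carlo estimate: random timesteps and Gaussian noise.
        t = torch.randint(0, num_timesteps, (n_samples,), device=device)
        noise = torch.randn(n_samples, *x.shape, device=device)
        a_bar = alphas_cumprod[t].view(-1, *([1] * x.dim()))
        # Forward diffusion: x_t = sqrt(a_bar) * x + sqrt(1 - a_bar) * eps.
        x_t = a_bar.sqrt() * x.unsqueeze(0) + (1.0 - a_bar).sqrt() * noise
        eps_pred = eps_model(x_t, t, y_batch)
        # Up to constants and weighting, the negative denoising MSE acts as
        # an ELBO-based proxy for log p(x | y).
        logits[y] = -F.mse_loss(eps_pred, noise)
    # Bayes' theorem with a uniform prior: p(y | x) proportional to exp(ELBO_y).
    return F.softmax(logits, dim=0)
```

The paper's Noised Diffusion Classifiers extend this idea by deriving ELBOs for Gaussian-corrupted inputs, which is what enables the tighter randomized-smoothing-style certificates reported in the abstract; that derivation is not reproduced in this sketch.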

Authors (7)
  1. Huanran Chen (21 papers)
  2. Yinpeng Dong (102 papers)
  3. Shitong Shao (26 papers)
  4. Zhongkai Hao (24 papers)
  5. Xiao Yang (158 papers)
  6. Hang Su (224 papers)
  7. Jun Zhu (424 papers)
Citations (10)