Improving Adversarial Robustness by Contrastive Guided Diffusion Process (2210.09643v2)
Abstract: Synthetic data generation has become an emerging tool for improving adversarial robustness in classification tasks, since robust learning requires significantly more training samples than standard classification. Among deep generative models, diffusion models have been shown to produce high-quality synthetic images and have performed well in improving adversarial robustness. However, diffusion-type methods are typically slower at data generation than other generative models. Although various acceleration techniques have been proposed recently, it is equally important to improve the sample efficiency of the generated data for the downstream task. In this paper, we first analyze the optimality condition on the synthetic distribution for achieving non-trivial robust accuracy. We show that enhancing the distinguishability among the generated data is critical for improving adversarial robustness. We therefore propose the Contrastive-Guided Diffusion Process (Contrastive-DP), which uses a contrastive loss to guide the diffusion model during data generation. We verify our theoretical results with simulations and demonstrate the strong performance of Contrastive-DP on image datasets.
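The mechanism described above, steering the reverse diffusion process with the gradient of a contrastive loss so that generated samples become more distinguishable, can be sketched as follows. This is a minimal illustration in PyTorch, assuming a pretrained DDPM-style noise predictor `eps_model(x, t)` and a feature encoder `encoder(x)`; the guidance scale `lam`, the InfoNCE-style batch repulsion term, and all names are illustrative assumptions, not the authors' released implementation.

```python
# Sketch of one contrastive-guided reverse diffusion step (Contrastive-DP idea).
# Assumptions: `eps_model(x, t)` predicts the noise of a trained DDPM, and
# `encoder(x)` maps images to feature vectors; `lam` is a hypothetical guidance scale.
import torch
import torch.nn.functional as F

def contrastive_guidance(x, encoder, tau=0.5):
    """InfoNCE-style loss over a batch: every other sample acts as a negative,
    so minimizing this loss pushes the generated samples apart in feature space."""
    z = F.normalize(encoder(x), dim=1)            # (B, d) unit-norm embeddings
    sim = z @ z.t() / tau                         # pairwise cosine similarities
    mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float('-inf'))    # exclude self-similarity
    # Larger logsumexp = a more clustered batch, so descending it spreads samples out.
    return torch.logsumexp(sim, dim=1).mean()

@torch.no_grad()
def guided_reverse_step(x_t, t, eps_model, encoder, alpha, alpha_bar, sigma, lam=0.1):
    """One DDPM reverse step whose mean is shifted by the contrastive gradient."""
    with torch.enable_grad():
        x_in = x_t.detach().requires_grad_(True)
        grad = torch.autograd.grad(contrastive_guidance(x_in, encoder), x_in)[0]
    eps = eps_model(x_t, t)
    # Standard DDPM posterior mean, then a step down the contrastive loss.
    mean = (x_t - (1 - alpha) / (1 - alpha_bar).sqrt() * eps) / alpha.sqrt()
    mean = mean - lam * grad                      # steer toward distinguishable samples
    return mean + sigma * torch.randn_like(x_t)
```

The structure mirrors classifier guidance for diffusion models: the only change in this sketch is that the per-step gradient comes from a contrastive objective computed over the current batch rather than from a classifier's log-likelihood.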
- Yidong Ouyang
- Liyan Xie
- Guang Cheng