PDA: Progressive Data Augmentation for General Robustness of Deep Neural Networks (1909.04839v3)

Published 11 Sep 2019 in cs.LG, cs.CV, and stat.ML

Abstract: Adversarial images are designed to mislead deep neural networks (DNNs) and have attracted great attention in recent years. Although several defense strategies achieve encouraging robustness against adversarial samples, most of them fail to improve robustness to common corruptions such as noise, blur, and weather/digital effects (e.g., frost, pixelation). To address this problem, we propose a simple yet effective method, named Progressive Data Augmentation (PDA), which enables general robustness of DNNs by progressively injecting diverse adversarial noises during training. In other words, DNNs trained with PDA obtain greater robustness against both adversarial attacks and common corruptions than recent state-of-the-art methods. We also find that PDA is more efficient than prior art and prevents the accuracy drop on clean (unattacked) samples. Furthermore, we theoretically show that PDA can control the perturbation bound and guarantees better generalization ability than existing work. Extensive experiments on benchmarks such as CIFAR-10, SVHN, and ImageNet demonstrate that PDA significantly outperforms its counterparts in various experimental setups.
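The core idea in the abstract — training on adversarial noise whose magnitude grows progressively over the course of training — can be illustrated with a toy sketch. The code below is not the paper's algorithm; it is a minimal, hedged illustration assuming an FGSM-style perturbation and a hypothetical linear schedule for the perturbation bound, applied to a tiny logistic-regression "network":

```python
import numpy as np

def fgsm_perturb(x, grad_x, eps):
    """FGSM-style perturbation: step of size eps along the sign of the
    input gradient, clipped to the valid input range [0, 1]."""
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)

def progressive_schedule(epoch, num_epochs, eps_max=0.1):
    """Hypothetical schedule: the perturbation bound grows linearly from
    near zero to eps_max, so early training sees mild noise only."""
    return eps_max * (epoch + 1) / num_epochs

def train_logreg_pda(X, y, num_epochs=200, lr=0.5):
    """Toy logistic regression trained on progressively perturbed inputs.
    Stands in for a DNN purely to keep the sketch self-contained."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for epoch in range(num_epochs):
        eps = progressive_schedule(epoch, num_epochs)
        # Gradient of the logistic loss w.r.t. the inputs.
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        grad_x = (p - y)[:, None] * w[None, :]
        # Inject adversarial noise at the current (growing) bound.
        X_adv = fgsm_perturb(X, grad_x, eps)
        # Ordinary parameter update on the perturbed batch.
        p_adv = 1.0 / (1.0 + np.exp(-(X_adv @ w + b)))
        err = p_adv - y
        w -= lr * X_adv.T @ err / len(y)
        b -= lr * err.mean()
    return w, b
```

Because the schedule starts small, the model first fits mostly clean data and only later faces stronger perturbations; the names `progressive_schedule` and `eps_max` are assumptions for illustration, not identifiers from the paper.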

Authors (8)
  1. Hang Yu
  2. Aishan Liu
  3. Xianglong Liu
  4. Gengchao Li
  5. Ping Luo
  6. Ran Cheng
  7. Jichen Yang
  8. Chongzhi Zhang
Citations (10)
