Generative Adversarial Trainer: Defense to Adversarial Perturbations with GAN (1705.03387v3)

Published 9 May 2017 in cs.LG and stat.ML

Abstract: We propose a novel technique that makes a neural network robust to adversarial examples using a generative adversarial network. We alternately train the classifier and generator networks: the generator produces an adversarial perturbation that can easily fool the classifier by using the gradient of each image, while the classifier is simultaneously trained to correctly classify both the original images and the adversarial images produced by the generator. This procedure makes the classifier network more robust to adversarial perturbations. Furthermore, our adversarial training framework efficiently reduces overfitting and outperforms other regularization methods such as Dropout. We applied our method to supervised learning on the CIFAR datasets, and experimental results show that our method significantly lowers the generalization error of the network. To the best of our knowledge, this is the first method that uses a GAN to improve supervised learning.
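The alternating scheme described in the abstract lends itself to a compact training loop. Below is a minimal PyTorch sketch of one such step, not the authors' implementation: `classifier`, `generator`, the two optimizers, and the perturbation-size weight `c` are assumed placeholders, and the generator is assumed to map the per-image loss gradient to a perturbation, as the abstract describes.

```python
import torch
import torch.nn.functional as F

def gat_step(classifier, generator, opt_f, opt_g, x, y, c=1.0):
    """One alternating GAT step (sketch): the generator learns to turn the
    classifier's input gradient into a fooling perturbation, and the
    classifier then learns to classify both clean and perturbed images.
    `c` is an assumed hyperparameter penalizing large perturbations."""
    # Gradient of the classification loss w.r.t. the input images.
    x = x.requires_grad_(True)
    loss_clean = F.cross_entropy(classifier(x), y)
    grad = torch.autograd.grad(loss_clean, x)[0].detach()
    x = x.detach()

    # Generator update: maximize the classifier's loss on the perturbed
    # image while keeping the perturbation small (L2 penalty).
    delta = generator(grad)
    loss_g = (-F.cross_entropy(classifier(x + delta), y)
              + c * delta.pow(2).flatten(1).sum(1).mean())
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()

    # Classifier update: classify both the original and the adversarial
    # images produced by the (just-updated) generator.
    delta = generator(grad).detach()
    loss_f = (F.cross_entropy(classifier(x), y)
              + F.cross_entropy(classifier(x + delta), y))
    opt_f.zero_grad()
    loss_f.backward()
    opt_f.step()
    return loss_f.item(), loss_g.item()
```

In this sketch both updates happen in a single call for brevity; an actual training run would alternate these updates over minibatches until both networks converge.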

Authors (3)
  1. Hyeungill Lee (1 paper)
  2. Sungyeob Han (1 paper)
  3. Jungwoo Lee (39 papers)
Citations (149)
