Perlin Noise Improve Adversarial Robustness (2112.13408v1)

Published 26 Dec 2021 in cs.LG and cs.AI

Abstract: Adversarial examples are specially crafted inputs that perturb the output of a deep neural network, causing a learning algorithm to make intentional errors in the production environment. Most existing methods for generating adversarial examples require gradient information; even universal perturbations that are independent of the target model rely on gradients to some extent. Procedural noise adversarial examples are a new approach to adversarial example generation that uses computer-graphics noise to produce universal adversarial perturbations quickly, without relying on gradient information. Following the defensive idea of adversarial training, we train neural networks on Perlin noise to obtain models that can defend against procedural noise adversarial examples. Combined with fine-tuning from pre-trained models, this yields faster training as well as higher accuracy. Our study shows that procedural noise adversarial examples are defensible, but why procedural noise can generate adversarial examples, and how to defend against other kinds of procedural noise adversarial examples that may emerge in the future, remain open questions.
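
The page does not include code for this paper. As a rough illustration of the gradient-free idea the abstract describes, the sketch below builds a single-octave 2D Perlin noise image in NumPy and thresholds it into an L∞-bounded universal perturbation. The function names (`perlin_grid`, `perlin_perturbation`), the sign-thresholding step, and the ε = 8/255 budget are illustrative assumptions, not the authors' exact procedure; the procedural-noise attacks this work builds on (cf. Co et al., CCS 2019) additionally pass the noise through a sine color map and tune its parameters.

```python
import numpy as np

def perlin_grid(h, w, period=8, seed=0):
    """Single-octave 2D Perlin noise image of shape (h, w).

    `period` is the spacing, in pixels, of the random-gradient lattice.
    """
    rng = np.random.default_rng(seed)
    gy, gx = h // period + 2, w // period + 2
    angles = rng.uniform(0.0, 2.0 * np.pi, size=(gy, gx))
    grads = np.stack([np.cos(angles), np.sin(angles)], axis=-1)  # unit gradients

    ys, xs = np.mgrid[0:h, 0:w] / float(period)  # lattice coordinates per pixel
    y0, x0 = ys.astype(int), xs.astype(int)      # indices of each cell's corner
    fy, fx = ys - y0, xs - x0                    # fractional offsets inside the cell

    def corner(iy, ix, dy, dx):
        g = grads[iy, ix]                        # gradient at a lattice corner
        return g[..., 0] * dx + g[..., 1] * dy   # dot(gradient, offset)

    def fade(t):                                 # Perlin's smoothstep: 6t^5 - 15t^4 + 10t^3
        return t * t * t * (t * (t * 6 - 15) + 10)

    u, v = fade(fx), fade(fy)
    n00 = corner(y0,     x0,     fy,     fx)
    n01 = corner(y0,     x0 + 1, fy,     fx - 1)
    n10 = corner(y0 + 1, x0,     fy - 1, fx)
    n11 = corner(y0 + 1, x0 + 1, fy - 1, fx - 1)
    top    = n00 * (1 - u) + n01 * u             # interpolate along x ...
    bottom = n10 * (1 - u) + n11 * u
    return top * (1 - v) + bottom * v            # ... then along y

def perlin_perturbation(shape, eps=8 / 255, period=8, seed=0):
    """Gradient-free universal perturbation: sign of Perlin noise, scaled to eps."""
    h, w = shape
    return eps * np.sign(perlin_grid(h, w, period=period, seed=seed))

# Hypothetical usage for adversarial training on images in [0, 1]:
delta = perlin_perturbation((32, 32), eps=8 / 255, seed=42)
# x_adv = np.clip(x + delta[None, ..., None], 0.0, 1.0)  # broadcast over batch/channels
```

Such a perturbation is fixed across inputs (universal) and needs no access to model gradients; the adversarial-training defense then amounts to mixing `x + delta` into the training batches of a (possibly pre-trained, then fine-tuned) network.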

Authors (5)
  1. Chengjun Tang (1 paper)
  2. Kun Zhang (353 papers)
  3. Chunfang Xing (1 paper)
  4. Yong Ding (30 papers)
  5. Zengmin Xu (1 paper)
Citations (2)
