Can Intelligent Hyperparameter Selection Improve Resistance to Adversarial Examples?

Published 14 Feb 2019 in cs.LG, cs.CR, cs.CV, and stat.ML (arXiv:1902.05586v1)

Abstract: Convolutional Neural Networks, and deep learning classification systems in general, have been shown to be vulnerable to attack by specially crafted data samples that appear to belong to one class but are instead classified as another, commonly known as adversarial examples. A variety of attack strategies have been proposed to craft these samples; however, there is no standard model used to compare the success of each type of attack. Furthermore, no currently available literature evaluates how common hyperparameters and optimization strategies affect a model's ability to resist these samples. This research fills that gap and provides a basis for selecting training and model parameters in future research on evasion attacks against convolutional neural networks. The findings indicate that the selection of model hyperparameters does impact a model's ability to resist attack, although hyperparameters alone cannot prevent the existence of adversarial examples.
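
The "variety of attack strategies" the abstract refers to includes gradient-based evasion attacks such as the well-known Fast Gradient Sign Method (FGSM). The paper's own attack and model configurations are not given on this page, so the sketch below is only an illustrative, hypothetical PyTorch example of crafting adversarial examples and measuring a model's resistance to them: the stand-in classifier, the random data, and the epsilon values are all assumptions, not the authors' setup.

```python
# A minimal FGSM sketch (assumptions: PyTorch, a toy stand-in classifier,
# random data; this is not the paper's actual experimental setup).
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon):
    """Perturb x along the sign of the loss gradient (the FGSM attack)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss; clamp to the valid pixel range.
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

def adversarial_accuracy(model, x, y, epsilon):
    """Fraction of adversarially perturbed inputs the model still classifies correctly."""
    x_adv = fgsm_attack(model, x, y, epsilon)
    with torch.no_grad():
        return (model(x_adv).argmax(dim=1) == y).float().mean().item()

if __name__ == "__main__":
    torch.manual_seed(0)
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # hypothetical classifier
    x = torch.rand(64, 1, 28, 28)        # hypothetical batch of images in [0, 1]
    y = torch.randint(0, 10, (64,))      # hypothetical labels
    # Sweeping attack strength, analogous to sweeping training hyperparameters
    # and comparing each configuration's resistance to adversarial examples.
    for eps in (0.0, 0.05, 0.1, 0.3):
        print(f"epsilon={eps}: adversarial accuracy={adversarial_accuracy(model, x, y, eps):.2f}")
```

In the same way this loop compares accuracy under increasing attack strength, one could hold the attack fixed and vary a training hyperparameter (learning rate, batch size, optimizer) across retrained models, which is the kind of comparison the abstract describes.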
