Using Non-invertible Data Transformations to Build Adversarial-Robust Neural Networks (1610.01934v5)

Published 6 Oct 2016 in cs.LG

Abstract: Deep neural networks have proven to be quite effective in a wide variety of machine learning tasks, ranging from improved speech recognition systems to advancing the development of autonomous vehicles. However, despite their superior performance in many applications, these models have recently been shown to be susceptible to a particular type of attack carried out by generating synthetic examples referred to as adversarial samples. These samples are constructed by manipulating real examples from the training data distribution in order to "fool" the original neural model, resulting in misclassification (with high confidence) of previously correctly classified samples. Addressing this weakness is of utmost importance if deep neural architectures are to be applied to critical applications, such as those in the domain of cybersecurity. In this paper, we present an analysis of this fundamental flaw lurking in all neural architectures to uncover limitations of previously proposed defense mechanisms. More importantly, we present a unifying framework for protecting deep neural models using a non-invertible data transformation, developing two adversary-resilient architectures that utilize linear and nonlinear dimensionality reduction. Empirical results indicate that our framework provides better robustness compared to state-of-the-art solutions while incurring negligible degradation in accuracy.
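The linear variant of the defense the abstract describes can be sketched, under assumptions, as a lossy projection (here PCA) applied to inputs before the classifier: because discarded components cannot be reconstructed, the transformation is non-invertible. The synthetic dataset, reduced dimensionality, and downstream classifier below are illustrative placeholders, not the paper's actual architectures or evaluation setup.

```python
# Minimal sketch: non-invertible (lossy) input transformation before a classifier.
# PCA stands in for the paper's linear dimensionality reduction; the nonlinear
# variant would replace it with a learned reduction. All hyperparameters here
# are assumptions for illustration only.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in data (the paper evaluates on real benchmarks).
X, y = make_classification(n_samples=2000, n_features=100, n_informative=30,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Keep only the top principal components; the dropped components make the
# mapping non-invertible, which is the property the defense relies on.
pca = PCA(n_components=20).fit(X_train)

# Classifier is trained on the reduced representation rather than raw inputs.
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
clf.fit(pca.transform(X_train), y_train)

print("clean accuracy on reduced inputs:",
      clf.score(pca.transform(X_test), y_test))
```

The intuition is that an adversary perturbing the raw input has no exact inverse of the projection to target, so input-space perturbations crafted against the original model map imperfectly onto the reduced representation the classifier actually sees.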

Authors (9)
  1. Qinglong Wang (18 papers)
  2. Wenbo Guo (40 papers)
  3. Alexander G. Ororbia II (14 papers)
  4. Xinyu Xing (34 papers)
  5. Lin Lin (277 papers)
  6. C. Lee Giles (69 papers)
  7. Xue Liu (156 papers)
  8. Peng Liu (372 papers)
  9. Gang Xiong (37 papers)
Citations (9)
