Understanding Adversarial Behavior of DNNs by Disentangling Non-Robust and Robust Components in Performance Metric (1906.02494v1)

Published 6 Jun 2019 in stat.ML and cs.LG

Abstract: The vulnerability to slight input perturbations is a worrying yet intriguing property of deep neural networks (DNNs). Although many previous works have studied the reasons behind this adversarial behavior, the relationship between the generalization performance and the adversarial behavior of DNNs is still little understood. In this work, we reveal this relationship by introducing a metric that characterizes the generalization performance of a DNN. The metric can be disentangled into an information-theoretic non-robust component, responsible for adversarial behavior, and a robust component. We then show experimentally that current DNNs rely heavily on optimizing the non-robust component to achieve decent performance. We also demonstrate that current state-of-the-art adversarial training algorithms robustify DNNs by preventing them from using the non-robust component to distinguish samples from different categories. Based on these findings, we further point out a possible direction for achieving decent standard performance and adversarial robustness simultaneously. We believe our theory can inspire the community to make further discoveries about the relationship between the standard generalization and adversarial generalization of deep learning models.
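
For context on the adversarial training the abstract refers to, the sketch below shows a standard PGD-based adversarial training loop (in the style of Madry et al.). It is a generic illustration of that technique, not the metric or decomposition proposed in this paper; `model`, `loader`, `optimizer`, and the attack hyperparameters (`epsilon`, `alpha`, `steps`) are placeholder assumptions.

```python
# Generic PGD adversarial training sketch (PyTorch). Illustrative only;
# not the method of the paper above. Hyperparameters are placeholder choices.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon=8/255, alpha=2/255, steps=10):
    """Craft L-infinity bounded adversarial examples with projected gradient descent."""
    # Random start inside the epsilon-ball, clipped to valid pixel range.
    x_adv = (x + torch.empty_like(x).uniform_(-epsilon, epsilon)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()                   # gradient ascent step
        x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon)  # project back into the eps-ball
        x_adv = x_adv.clamp(0, 1).detach()                             # keep valid pixel range
    return x_adv

def adversarial_training_epoch(model, loader, optimizer, device="cuda"):
    """One epoch of training on PGD adversarial examples only."""
    model.train()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = pgd_attack(model, x, y)
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()
```

In the framing of the abstract, training on such worst-case perturbed inputs is what discourages the network from relying on the non-robust component to separate classes, typically at some cost to standard accuracy.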

Authors (6)
  1. Yujun Shi (23 papers)
  2. Benben Liao (14 papers)
  3. Guangyong Chen (55 papers)
  4. Yun Liu (213 papers)
  5. Ming-Ming Cheng (185 papers)
  6. Jiashi Feng (295 papers)
Citations (2)
