
Interpreting Adversarial Robustness: A View from Decision Surface in Input Space (1810.00144v2)

Published 29 Sep 2018 in cs.LG, cs.AI, and stat.ML

Abstract: One popular hypothesis about neural network generalization is that flat local minima of the loss surface in parameter space lead to good generalization. However, we demonstrate that the loss surface in parameter space has no obvious relationship with generalization, especially under adversarial settings. By visualizing decision surfaces in both parameter space and input space, we instead show that the geometric properties of the decision surface in input space correlate well with adversarial robustness. We then propose an adversarial robustness indicator, which can evaluate a neural network's intrinsic robustness without testing its accuracy under adversarial attacks. Guided by this indicator, we further propose a robust training method. Without involving adversarial training, our method enhances a network's intrinsic adversarial robustness against various adversarial attacks.
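The decision-surface visualization the abstract describes can be illustrated with a minimal sketch: sample the classifier's decision margin (true-class logit minus the best competing logit) on a 2-D cross-section of input space spanned by two chosen directions. The toy linear classifier, the random directions, and the grid resolution below are all illustrative assumptions, not the paper's actual setup; in the paper's setting, the flatness of this surface around an input is what correlates with robustness.

```python
import numpy as np

# Assumed toy setup for illustration: a linear 3-class classifier on 10-dim inputs.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 10))   # class weight vectors (hypothetical)
x = rng.normal(size=10)        # an anchor input point

def decision_value(z, true_class=0):
    """Margin between the true-class logit and the best other logit.
    The geometry of this scalar field over input space is the
    'decision surface' being visualized."""
    logits = W @ z
    others = np.delete(logits, true_class)
    return logits[true_class] - others.max()

# Cross-section of the decision surface along two unit input-space
# directions d1, d2 (adversarial directions could be substituted).
d1, d2 = rng.normal(size=10), rng.normal(size=10)
d1 /= np.linalg.norm(d1)
d2 /= np.linalg.norm(d2)

span = np.linspace(-1.0, 1.0, 21)
surface = np.array([[decision_value(x + a * d1 + b * d2)
                     for b in span] for a in span])
print(surface.shape)  # a 21x21 grid of margin values, ready to contour-plot
```

A steep drop of `surface` away from the center suggests nearby points cross the decision boundary, i.e. small perturbations flip the prediction; a flat, high-margin surface is the robust case the paper's indicator is meant to capture.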

Authors (5)
  1. Fuxun Yu (39 papers)
  2. Chenchen Liu (24 papers)
  3. Yanzhi Wang (197 papers)
  4. Liang Zhao (353 papers)
  5. Xiang Chen (343 papers)
Citations (26)
