
Augmenting Model Robustness with Transformation-Invariant Attacks (1901.11188v2)

Published 31 Jan 2019 in cs.CV

Abstract: The vulnerability of neural networks under adversarial attacks has raised serious concerns and motivated extensive research. It has been shown that both neural networks and adversarial attacks against them can be sensitive to input transformations such as linear translation and rotation, and that human vision, which is robust against adversarial attacks, is invariant to natural input transformations. Based on these observations, this paper tests the hypothesis that model robustness can be further improved when the model is adversarially trained against transformed attacks and transformation-invariant attacks. Experiments on MNIST, CIFAR-10, and restricted ImageNet show that while transformations of attacks alone do not affect robustness, transformation-invariant attacks can improve model robustness by 2.5\% on MNIST, 3.7\% on CIFAR-10, and 1.1\% on restricted ImageNet. We discuss the intuition behind this phenomenon.
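The abstract's central idea, an attack whose perturbation remains effective under input transformations, can be sketched by averaging input gradients over a set of transformations before taking a signed step (in the spirit of expectation-over-transformation). This is a minimal NumPy illustration, not the authors' implementation: the toy linear classifier, the circular-shift "translation", and all function names here are assumptions for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def translate(x, shift):
    # Illustrative "translation": circularly shift a flattened 8x8 image
    # along its rows. The paper considers translations and rotations;
    # this toy version is an assumption chosen for simplicity.
    return np.roll(x.reshape(8, 8), shift, axis=0).reshape(-1)

def input_grad(w, x, y):
    # Gradient of the logistic loss w.r.t. the input for a linear
    # classifier p = sigmoid(w.x) -- a stand-in for a real network.
    p = 1.0 / (1.0 + np.exp(-x @ w))
    return (p - y) * w

def invariant_fgsm(w, x, y, eps, shifts=(-1, 0, 1)):
    # Transformation-invariant FGSM sketch: compute the input gradient
    # under each transformation, map it back to the original frame
    # (the inverse shift), average, then take one signed step.
    grads = []
    for s in shifts:
        g = input_grad(w, translate(x, s), y)
        grads.append(np.roll(g.reshape(8, 8), -s, axis=0).reshape(-1))
    g_avg = np.mean(grads, axis=0)
    return np.clip(x + eps * np.sign(g_avg), 0.0, 1.0)

w = rng.normal(size=64)
x = rng.uniform(size=64)
x_adv = invariant_fgsm(w, x, y=1, eps=0.1)
```

Adversarial training against such attacks would then minimize the classification loss on `x_adv` rather than `x`; the paper's reported gains come from this training loop, not from the attack in isolation.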

Authors (6)
  1. Houpu Yao
  2. Zhe Wang
  3. Guangyu Nie
  4. Yassine Mazboudi
  5. Yezhou Yang
  6. Yi Ren
Citations (3)
