Perturbations are not Enough: Generating Adversarial Examples with Spatial Distortions (1910.01329v1)

Published 3 Oct 2019 in cs.LG, cs.CR, and stat.ML

Abstract: Deep neural network image classifiers are reported to be susceptible to adversarial evasion attacks, which use carefully crafted images created to mislead a classifier. Recently, various kinds of adversarial attack methods have been proposed, most of which focus on adding small perturbations to input images. Despite the success of existing approaches, generating realistic adversarial images with small perturbations remains a challenging problem. In this paper, we aim to address this problem by proposing a novel adversarial method, which generates adversarial examples by imposing not only perturbations but also spatial distortions on input images, including scaling, rotation, shear, and translation. As humans are less susceptible to small spatial distortions, the proposed approach can produce visually more realistic attacks with smaller perturbations, which deceive classifiers without affecting human predictions. We learn our method with amortized techniques using neural networks and generate adversarial examples efficiently via a forward pass of the networks. Extensive experiments on attacking different types of non-robustified classifiers and robust classifiers with defences show that our method achieves state-of-the-art performance in comparison with advanced attack baselines.
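The spatial distortions the abstract lists (scaling, rotation, shear, and translation) are the components of a 2D affine transform. As an illustrative sketch only — not the paper's learned, amortized attack — the snippet below composes such a transform and applies it to an image with inverse mapping and nearest-neighbour sampling; all function names and parameter choices here are hypothetical.

```python
import numpy as np

def affine_parts(scale=1.0, angle=0.0, shear=0.0, tx=0.0, ty=0.0):
    """Compose scale, rotation, and shear into a 2x2 linear map,
    plus a separate translation vector (hypothetical helper)."""
    c, s = np.cos(angle), np.sin(angle)
    rot = np.array([[c, -s], [s, c]])
    shr = np.array([[1.0, shear], [0.0, 1.0]])
    return scale * rot @ shr, np.array([tx, ty])

def warp(img, lin, trans):
    """Apply the affine transform by inverse-mapping every output
    pixel back to a source pixel (nearest neighbour, edge-clamped)."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    coords = np.stack([xs.ravel(), ys.ravel()])       # (2, h*w) output grid
    src = np.linalg.inv(lin) @ (coords - trans[:, None])  # undo the transform
    sx = np.clip(np.round(src[0]).astype(int), 0, w - 1)
    sy = np.clip(np.round(src[1]).astype(int), 0, h - 1)
    return img[sy, sx].reshape(h, w)

# A small distortion of this kind (e.g. a ~3-degree rotation with slight
# shear) leaves the image human-recognizable, which is why the paper can
# combine it with smaller additive perturbations.
lin, trans = affine_parts(scale=1.02, angle=np.deg2rad(3.0), shear=0.05)
```

In the paper's setting these transform parameters would be produced by a trained network rather than chosen by hand; the sketch only shows what the distortion itself does to an input.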

Authors (6)
  1. He Zhao (117 papers)
  2. Trung Le (94 papers)
  3. Paul Montague (27 papers)
  4. Olivier De Vel (8 papers)
  5. Tamas Abraham (14 papers)
  6. Dinh Phung (147 papers)
Citations (8)
