
Localized Uncertainty Attacks (2106.09222v1)

Published 17 Jun 2021 in stat.ML, cs.CR, cs.CV, and cs.LG

Abstract: The susceptibility of deep learning models to adversarial perturbations has stirred renewed attention to adversarial examples, resulting in a number of attacks. However, most of these attacks fail to encompass a large spectrum of adversarial perturbations that are imperceptible to humans. In this paper, we present localized uncertainty attacks, a novel class of threat models against deterministic and stochastic classifiers. Under this threat model, we create adversarial examples by perturbing only the regions of the input where the classifier is uncertain. To find such regions, we use the predictive uncertainty of the classifier when it is stochastic, or we learn a surrogate model to amortize the uncertainty when it is deterministic. Unlike $\ell_p$-ball or functional attacks, which perturb inputs indiscriminately, our targeted changes can be less perceptible. When considered under our threat model, these attacks still produce strong adversarial examples, and the examples retain a greater degree of similarity to the inputs.
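The core idea in the abstract, restricting an adversarial perturbation to the input regions where the classifier is most uncertain, can be sketched in a few lines. The following is a minimal illustration, not the paper's exact procedure: the top-fraction thresholding rule, the FGSM-style signed-gradient step, and all function names are assumptions made for the sketch.

```python
import numpy as np

def uncertainty_mask(uncertainty, top_fraction=0.1):
    """Binary mask selecting the most uncertain input locations.

    `uncertainty` is a per-location uncertainty map (e.g. predictive
    variance from stochastic forward passes, or a surrogate model's
    output for a deterministic classifier).
    """
    k = max(1, int(top_fraction * uncertainty.size))
    # Value of the k-th largest entry; everything at or above it is kept.
    thresh = np.partition(uncertainty.ravel(), -k)[-k]
    return (uncertainty >= thresh).astype(uncertainty.dtype)

def localized_perturbation(x, grad, uncertainty, eps=0.03, top_fraction=0.1):
    """FGSM-style signed-gradient step confined to uncertain regions.

    `x` is the input in [0, 1], `grad` the loss gradient w.r.t. `x`.
    Locations outside the uncertainty mask are left untouched, so the
    adversarial example stays close to the original everywhere else.
    """
    mask = uncertainty_mask(uncertainty, top_fraction)
    x_adv = x + eps * np.sign(grad) * mask
    return np.clip(x_adv, 0.0, 1.0)
```

For a stochastic classifier, the uncertainty map could be the per-pixel variance of predictions across several stochastic forward passes; for a deterministic one, a learned surrogate would supply it, as the abstract outlines.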

Authors (6)
  1. Ousmane Amadou Dia (2 papers)
  2. Theofanis Karaletsos (28 papers)
  3. Caner Hazirbas (19 papers)
  4. Cristian Canton Ferrer (32 papers)
  5. Ilknur Kaynar Kabul (3 papers)
  6. Erik Meijer (10 papers)
Citations (2)
