The Attack Generator: A Systematic Approach Towards Constructing Adversarial Attacks (1906.07077v1)

Published 17 Jun 2019 in cs.LG, cs.CR, and stat.ML

Abstract: Most state-of-the-art ML classification systems are vulnerable to adversarial perturbations. As a consequence, adversarial robustness poses a significant challenge for the deployment of ML-based systems in safety- and security-critical environments like autonomous driving, disease detection or unmanned aerial vehicles. In recent years we have seen an impressive number of publications presenting more and more new adversarial attacks. However, the attack research seems to be rather unstructured and new attacks often appear to be random selections from the unlimited set of possible adversarial attacks. With this publication, we present a structured analysis of the adversarial attack creation process. By detecting different building blocks of adversarial attacks, we outline the road to new sets of adversarial attacks. We call this the "attack generator". In the pursuit of this objective, we summarize and extend existing adversarial perturbation taxonomies. The resulting taxonomy is then linked to the application context of computer vision systems for autonomous vehicles, i.e. semantic segmentation and object detection. Finally, in order to prove the usefulness of the attack generator, we investigate existing semantic segmentation attacks with respect to the detected defining components of adversarial attacks.
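
The abstract describes adversarial attacks as compositions of reusable building blocks. As a purely illustrative sketch, and not the paper's attack generator or taxonomy, the snippet below assembles a standard PGD-style attack from three common blocks: a surrogate loss, an L-infinity perturbation budget, and a gradient-sign step rule. The function name and the `eps`, `alpha`, and `steps` parameters are assumptions chosen for the example.

```python
# Illustrative only: a generic untargeted attack built from common "building
# blocks" (surrogate loss, perturbation constraint, step rule). This is a
# standard PGD/FGSM-style construction, not the method proposed in the paper.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Iterative gradient-sign steps projected onto an L-infinity ball of radius eps."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)       # building block: surrogate loss
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()        # building block: step rule
            x_adv = x + (x_adv - x).clamp(-eps, eps)   # building block: L-inf budget
            x_adv = x_adv.clamp(0.0, 1.0)              # keep pixels in a valid range
    return x_adv.detach()
```

Swapping any of the commented blocks (e.g., a segmentation loss instead of cross-entropy, or an L2 projection instead of the L-infinity clamp) yields a different attack, which is the kind of combinatorial construction the paper's "attack generator" formalizes.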

Authors (7)
  1. Felix Assion (5 papers)
  2. Peter Schlicht (22 papers)
  3. Florens Greßner (8 papers)
  4. Wiebke Günther (7 papers)
  5. Fabian Hüger (19 papers)
  6. Nico Schmidt (3 papers)
  7. Umair Rasheed (1 paper)
Citations (14)