Category-wise Attack: Transferable Adversarial Examples for Anchor Free Object Detection (2003.04367v4)
Abstract: Deep neural networks have been demonstrated to be vulnerable to adversarial attacks: subtle perturbations can completely change the classification results. Their vulnerability has led to a surge of research in this direction. However, most existing works are dedicated to attacking anchor-based object detection models. In this work, we aim to present an effective and efficient algorithm to generate adversarial examples that attack anchor-free object detection models, based on two approaches. First, we conduct category-wise instead of instance-wise attacks on the object detectors. Second, we leverage high-level semantic information to generate the adversarial examples. Surprisingly, the generated adversarial examples are not only able to effectively attack the targeted anchor-free object detector but can also be transferred to attack other object detectors, even anchor-based detectors such as Faster R-CNN.
- Quanyu Liao (5 papers)
- Xin Wang (1308 papers)
- Bin Kong (15 papers)
- Siwei Lyu (125 papers)
- Youbing Yin (12 papers)
- Qi Song (73 papers)
- Xi Wu (100 papers)
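
To make the category-wise idea concrete, below is a minimal, hypothetical sketch of a PGD-style attack that aggregates the loss over all detections belonging to each category rather than perturbing instances one at a time. The names `detector`, the loss formulation, and the step sizes (`epsilon`, `alpha`, `num_steps`) are illustrative assumptions, not the paper's actual implementation.

```python
import torch

def category_wise_attack(detector, image, num_steps=10, epsilon=8 / 255, alpha=2 / 255):
    """Hypothetical category-wise attack sketch.

    Assumes `detector(x)` returns per-location class scores of shape
    (num_locations, num_classes), as an anchor-free detector head might.
    The loss sums the confidence of every location's predicted category,
    so a single gradient step suppresses whole categories at once
    instead of attacking detected instances one by one.
    """
    x_adv = image.clone().detach()
    for _ in range(num_steps):
        x_adv.requires_grad_(True)
        scores = detector(x_adv)                      # (num_locations, num_classes)
        probs = scores.softmax(dim=-1)
        loss = probs.max(dim=-1).values.sum()         # aggregate over all detections
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()       # descend: lower category confidence
            x_adv = image + (x_adv - image).clamp(-epsilon, epsilon)  # L_inf projection
            x_adv = x_adv.clamp(0, 1).detach()
    return x_adv
```

Because the perturbation is optimized against shared, high-level category evidence rather than individual box predictions, the same example can plausibly degrade other detectors that rely on similar semantic features, which is one intuition for the transferability reported in the abstract.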