Transferable Adversarial Examples for Anchor Free Object Detection (2106.01618v2)
Abstract: Deep neural networks have been demonstrated to be vulnerable to adversarial attacks: a subtle perturbation can completely change the prediction result. This vulnerability has led to a surge of research in this direction, including adversarial attacks on object detection networks. However, previous studies have been dedicated to attacking anchor-based object detectors. In this paper, we present the first adversarial attack on anchor-free object detectors. It conducts category-wise attacks, rather than the instance-wise attacks of prior work, and leverages high-level semantic information to efficiently generate transferable adversarial examples, which can also attack other object detectors, even anchor-based detectors such as Faster R-CNN. Experimental results on two benchmark datasets demonstrate that our proposed method achieves state-of-the-art performance and transferability.
- Quanyu Liao (5 papers)
- Xin Wang (1307 papers)
- Bin Kong (15 papers)
- Siwei Lyu (125 papers)
- Bin Zhu (218 papers)
- Youbing Yin (12 papers)
- Qi Song (73 papers)
- Xi Wu (100 papers)
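
The abstract describes a category-wise attack: instead of perturbing the image to suppress individual detected instances, the attack drives down the detector's evidence for every object of a category at once. The sketch below illustrates that idea with a generic PGD-style loop against an anchor-free detector's per-category heatmap; the `model` interface, the 0.3 presence threshold, and all hyperparameters are assumptions for illustration, not the paper's actual algorithm.

```python
import torch

def category_wise_attack(model, image, eps=8 / 255, alpha=1 / 255, steps=40):
    """PGD-style sketch of a category-wise attack (illustrative, not the
    paper's method): suppress all activations of each detected category
    in an anchor-free detector's class heatmap, rather than attacking
    detected instances one at a time.

    Assumes a hypothetical `model` mapping images in [0, 1] of shape
    (B, 3, H, W) to per-category score heatmaps of shape (B, C, h, w).
    """
    x_adv = image.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        heatmap = model(x_adv)  # (B, C, h, w) per-category scores

        # Categories the detector currently fires on anywhere in the image
        # (0.3 is an assumed presence threshold).
        present = heatmap.amax(dim=(2, 3)) > 0.3  # (B, C) bool mask

        # Category-wise loss: total evidence for all present categories.
        loss = (heatmap * present[..., None, None]).sum()

        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            # Descend to weaken category evidence, then project back into
            # the L-infinity ball of radius eps and the valid pixel range.
            x_adv = x_adv - alpha * grad.sign()
            x_adv = image + (x_adv - image).clamp(-eps, eps)
            x_adv = x_adv.clamp(0, 1)
        x_adv = x_adv.detach()
    return x_adv
```

Because the loss aggregates over whole heatmap channels rather than matched boxes, the gradient encodes category-level semantics instead of instance-specific details, which is the intuition behind the improved transferability claimed in the abstract.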