
Transferable Adversarial Examples for Anchor Free Object Detection (2106.01618v2)

Published 3 Jun 2021 in cs.CV

Abstract: Deep neural networks have been demonstrated to be vulnerable to adversarial attacks: subtle perturbations can completely change the prediction result. This vulnerability has led to a surge of research in the area, including adversarial attacks on object detection networks. However, previous studies have focused on attacking anchor-based object detectors. In this paper, we present the first adversarial attack on anchor-free object detectors. It conducts category-wise attacks, rather than the instance-wise attacks of prior work, and leverages high-level semantic information to efficiently generate transferable adversarial examples, which can also be transferred to attack other object detectors, including anchor-based detectors such as Faster R-CNN. Experimental results on two benchmark datasets demonstrate that our proposed method achieves state-of-the-art performance and transferability.
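The abstract's core idea, generating a perturbation from gradients of a high-level objective, can be illustrated with the classic one-step gradient-sign attack (FGSM). The sketch below applies it to a toy logistic model; it is a stand-in for intuition only, not the paper's category-wise attack on detector outputs, and all names (`fgsm_perturb`, `sigmoid`) are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """One-step gradient-sign perturbation for a logistic model p = sigmoid(w.x + b).

    Moves x in the direction that increases the cross-entropy loss,
    bounded elementwise by eps (an L-infinity budget).
    """
    p = sigmoid(w @ x + b)
    # Gradient of the binary cross-entropy loss with respect to the input x.
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.normal(size=4)
b = 0.0
x = rng.normal(size=4)
y = 1.0 if sigmoid(w @ x + b) > 0.5 else 0.0  # the model's current label

x_adv = fgsm_perturb(x, w, b, y, eps=0.5)
# The adversarial input stays within the eps budget but pushes the
# model's confidence away from its original prediction.
```

Attacks on detectors follow the same recipe but differentiate a detection-specific objective (e.g., heatmap or classification scores over many candidate locations) instead of a single classification loss, which is what makes category-wise formulations more efficient than perturbing each instance separately.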

Authors (8)
  1. Quanyu Liao (5 papers)
  2. Xin Wang (1307 papers)
  3. Bin Kong (15 papers)
  4. Siwei Lyu (125 papers)
  5. Bin Zhu (218 papers)
  6. Youbing Yin (12 papers)
  7. Qi Song (73 papers)
  8. Xi Wu (100 papers)
Citations (9)
