Category-wise Attack: Transferable Adversarial Examples for Anchor Free Object Detection (2003.04367v4)

Published 10 Feb 2020 in cs.CV, cs.CR, and cs.LG

Abstract: Deep neural networks have been demonstrated to be vulnerable to adversarial attacks: subtle perturbations can completely change the classification results. This vulnerability has led to a surge of research in the area. However, most works are dedicated to attacking anchor-based object detection models. In this work, we present an effective and efficient algorithm for generating adversarial examples that attack anchor-free object detection models, based on two approaches. First, we conduct category-wise rather than instance-wise attacks on the object detectors. Second, we leverage high-level semantic information to generate the adversarial examples. Surprisingly, the generated adversarial examples not only effectively attack the targeted anchor-free object detector but also transfer to other object detectors, even anchor-based detectors such as Faster R-CNN.
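
The first idea, attacking category-wise rather than instance-wise, can be illustrated with a short sketch. The snippet below is a hypothetical PGD-style implementation, not the paper's actual algorithm: it assumes a CenterNet-style anchor-free detector whose forward pass returns per-category heatmaps of shape (1, C, H, W), and it suppresses the confidence of every spatial location of every category in a single loss rather than looping over detected instances. The `model` interface, loss, and hyperparameters are all assumptions made for illustration.

```python
import torch

def category_wise_attack(model, image, eps=8/255, alpha=1/255, steps=10):
    """PGD-style sketch of a category-wise attack (illustrative, not the
    paper's exact method). `model` is assumed to map a (1, 3, H, W) image
    to per-category heatmaps of shape (1, C, H', W'), as in CenterNet-style
    anchor-free detectors."""
    adv = image.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        heatmaps = model(adv)
        # Category-wise objective: aggregate confidence over all spatial
        # locations of every category at once, rather than attacking each
        # detected instance separately.
        loss = heatmaps.sigmoid().sum()
        grad = torch.autograd.grad(loss, adv)[0]
        with torch.no_grad():
            adv = adv - alpha * grad.sign()               # push confidences down
            adv = image + (adv - image).clamp(-eps, eps)  # stay in the L_inf ball
            adv = adv.clamp(0, 1)                         # keep a valid image
        adv = adv.detach()
    return adv
```

In this simplified form, attacking the dense heatmaps also loosely stands in for the second idea, using high-level semantic outputs rather than final box predictions, though the paper's exact formulation may differ.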

Authors (7)
  1. Quanyu Liao (5 papers)
  2. Xin Wang (1308 papers)
  3. Bin Kong (15 papers)
  4. Siwei Lyu (125 papers)
  5. Youbing Yin (12 papers)
  6. Qi Song (73 papers)
  7. Xi Wu (100 papers)
Citations (8)
