Fast Local Attack: Generating Local Adversarial Examples for Object Detectors (2010.14291v1)

Published 27 Oct 2020 in cs.CV

Abstract: Deep neural networks are vulnerable to adversarial examples: adding imperceptible adversarial perturbations to images is enough to make them fail. Most existing research focuses on attacking image classifiers or anchor-based object detectors, and these methods generate perturbations over the whole image, which is unnecessary. In our work, we leverage higher-level semantic information to generate highly aggressive local perturbations for anchor-free object detectors. As a result, the attack is less computationally intensive and achieves higher black-box and transfer attack performance. The adversarial examples generated by our method are not only capable of attacking anchor-free object detectors but can also be transferred to attack anchor-based object detectors.
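
The abstract does not give implementation details, but the core idea it describes, restricting an adversarial perturbation to a small, semantically chosen region rather than perturbing the whole image, can be sketched in a few lines. The following is a minimal illustration and not the paper's actual algorithm: the attack loss and the construction of the local mask (e.g. from detector heatmaps) are assumptions for the sketch.

```python
import torch

def masked_fgsm_step(image, loss, mask, eps=8 / 255):
    """One FGSM-style attack step confined to a local region.

    image: input tensor with requires_grad=True
    loss:  scalar attack loss computed from the detector's outputs
           (assumed; the paper's specific loss is not given here)
    mask:  binary tensor broadcastable to image's shape, selecting
           the local region to perturb, e.g. derived from
           high-activation areas of an anchor-free detector's heatmap
    """
    grad, = torch.autograd.grad(loss, image)
    # Perturb only inside the mask; pixels outside it stay clean,
    # which is what makes the perturbation "local".
    perturbation = eps * grad.sign() * mask
    adv = (image + perturbation).clamp(0.0, 1.0)
    return adv.detach()
```

Because the gradient is zeroed outside the mask, fewer pixels change and fewer gradient computations matter per iteration, which is consistent with the abstract's claim of lower computational cost relative to whole-image attacks.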

Authors (7)
  1. Quanyu Liao (5 papers)
  2. Xin Wang (1308 papers)
  3. Bin Kong (15 papers)
  4. Siwei Lyu (125 papers)
  5. Youbing Yin (12 papers)
  6. Qi Song (73 papers)
  7. Xi Wu (100 papers)
Citations (4)
