
Data-Free Adversarial Perturbations for Practical Black-Box Attack (2003.01295v1)

Published 3 Mar 2020 in cs.CV

Abstract: Neural networks are vulnerable to adversarial examples, malicious inputs crafted to fool pre-trained models. Adversarial examples often exhibit black-box transferability: an adversarial example crafted for one model can also fool another model. However, existing black-box attack methods require samples from the training data distribution to improve the transferability of adversarial examples across models. Because of this data dependence, the fooling ability of adversarial perturbations applies only when the training data are accessible. In this paper, we present a data-free method for crafting adversarial perturbations that can fool a target model without any knowledge of the training data distribution. In the practical black-box setting, where attackers have access to neither the target model nor its training data, our method achieves high fooling rates on target models and outperforms other universal adversarial perturbation methods. Our results show empirically that current deep learning models remain at risk even when attackers have no access to training data.
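The black-box transferability the abstract relies on can be illustrated with a toy sketch. The code below is not the paper's data-free method; it is a minimal NumPy example, under assumed toy conditions, in which a single universal perturbation is crafted against a surrogate linear classifier the attacker controls, then applied to a different, unseen target classifier trained for the same task. Because the two models approximate the same decision boundary, the perturbation transfers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: two linear classifiers for the same task with
# slightly different weights. The surrogate stands in for the attacker's
# local model; the target is the black box the attacker cannot query.
d, n = 20, 200
w_true = rng.normal(size=d)
W_surrogate = w_true + 0.1 * rng.normal(size=d)
W_target = w_true + 0.1 * rng.normal(size=d)

X = rng.normal(size=(n, d))

def predict(W, X):
    """Binary prediction of a linear classifier."""
    return (X @ W > 0).astype(int)

# Craft ONE universal perturbation using only the surrogate: an
# FGSM-style signed step along the surrogate's weight vector, bounded
# in L-infinity norm by the budget eps. No target access is needed.
eps = 1.0
delta = eps * np.sign(W_surrogate)

# Apply the same perturbation to every input and measure the fooling
# rate on the target: the fraction of inputs whose target prediction
# changes between the clean and perturbed versions.
X_adv = X + delta
fool_rate = np.mean(predict(W_target, X_adv) != predict(W_target, X))
print(f"fooling rate on unseen target: {fool_rate:.2f}")
```

In this toy case the perturbation pushes roughly half the inputs across the target's decision boundary even though it was computed without ever touching the target or any "training data", which is the intuition behind transfer-based black-box attacks; the paper's contribution is removing the remaining dependence on in-distribution data when crafting such perturbations.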

Authors (6)
  1. Yulong Wang (58 papers)
  2. Xiaolu Zhang (39 papers)
  3. Lin Shang (5 papers)
  4. Chilin Fu (5 papers)
  5. Jun Zhou (370 papers)
  6. Zhaoxin Huan (10 papers)
Citations (12)
