
Intermediate Level Adversarial Attack for Enhanced Transferability (1811.08458v1)

Published 20 Nov 2018 in cs.LG, cs.CV, and stat.ML

Abstract: Neural networks are vulnerable to adversarial examples, malicious inputs crafted to fool trained models. Adversarial examples often exhibit black-box transfer, meaning that adversarial examples crafted for one model can fool another model. However, adversarial examples may be overfit to exploit the particular architecture and feature representation of a source model, resulting in sub-optimal black-box transfer to other target models. This leads us to introduce the Intermediate Level Attack (ILA), which attempts to fine-tune an existing adversarial example for greater black-box transferability by increasing its perturbation at a pre-specified layer of the source model. We show that our method can effectively achieve this goal and that we can select a nearly-optimal layer of the source model to perturb without any knowledge of the target models.
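
The sketch below illustrates the kind of intermediate-level fine-tuning the abstract describes: starting from an existing adversarial example, it pushes the feature difference at a chosen source-model layer further along the direction already established by the reference attack. This is a minimal, hedged reconstruction in PyTorch, not the authors' code; the function name, the hook-based feature extraction, and the hyperparameters (eps, step_size, num_steps) are illustrative assumptions.

```python
import torch


def ila_projection_attack(model, layer, x_clean, x_adv_ref,
                          eps=8 / 255, step_size=1 / 255, num_steps=10):
    """Fine-tune an existing adversarial example (x_adv_ref) by enlarging the
    intermediate-layer perturbation along the direction set by that reference
    attack (a projection-style ILA objective sketch)."""
    feats = {}

    def hook(_module, _inputs, output):
        feats["value"] = output

    handle = layer.register_forward_hook(hook)
    model.eval()

    # Record intermediate features of the clean input and the reference attack.
    with torch.no_grad():
        model(x_clean)
        f_clean = feats["value"].detach()
        model(x_adv_ref)
        f_ref = feats["value"].detach()
    ref_dir = (f_ref - f_clean).flatten(1)  # direction of the original perturbation

    x_adv = x_adv_ref.clone().detach()
    for _ in range(num_steps):
        x_adv.requires_grad_(True)
        model(x_adv)
        cur_dir = (feats["value"] - f_clean).flatten(1)
        # Maximize the projection of the current feature shift onto ref_dir.
        loss = (cur_dir * ref_dir).sum()
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + step_size * grad.sign()
            # Keep the result inside the epsilon ball and the valid image range.
            x_adv = torch.min(torch.max(x_adv, x_clean - eps), x_clean + eps)
            x_adv = x_adv.clamp(0, 1)

    handle.remove()
    return x_adv.detach()
```

In use, `layer` would be a module of the source network (e.g., one residual block of a ResNet) and `x_adv_ref` an adversarial example from a baseline attack such as I-FGSM; the paper's contribution is showing that this layer can be chosen well without access to the target models.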

Authors (8)
  1. Qian Huang (55 papers)
  2. Zeqi Gu (8 papers)
  3. Isay Katsman (12 papers)
  4. Horace He (12 papers)
  5. Pian Pawakapan (3 papers)
  6. Zhiqiu Lin (19 papers)
  7. Serge Belongie (125 papers)
  8. Ser-Nam Lim (116 papers)
Citations (4)