
A Little Robustness Goes a Long Way: Leveraging Robust Features for Targeted Transfer Attacks (2106.02105v2)

Published 3 Jun 2021 in cs.LG and cs.CR

Abstract: Adversarial examples for neural network image classifiers are known to be transferable: examples optimized to be misclassified by a source classifier are often misclassified as well by classifiers with different architectures. However, targeted adversarial examples -- optimized to be classified as a chosen target class -- tend to be less transferable between architectures. While prior research on constructing transferable targeted attacks has focused on improving the optimization procedure, in this work we examine the role of the source classifier. Here, we show that training the source classifier to be "slightly robust" -- that is, robust to small-magnitude adversarial examples -- substantially improves the transferability of class-targeted and representation-targeted adversarial attacks, even between architectures as different as convolutional neural networks and transformers. The results we present provide insight into the nature of adversarial examples as well as the mechanisms underlying so-called "robust" classifiers.
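To make the attack setting concrete, here is a toy sketch of a class-targeted adversarial attack: gradient descent on the cross-entropy loss toward a chosen target class. This is a hypothetical NumPy illustration on a linear softmax classifier, not the paper's actual setup (which attacks CNN and transformer image classifiers and trains the source model to be slightly robust); the model `W`, step size, and iteration count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "source classifier": logits = W @ x.
# (Stand-in for a neural network; purely illustrative.)
n_classes, dim = 3, 8
W = rng.normal(size=(n_classes, dim))

def predict(W, x):
    """Predicted class: argmax of the logits."""
    return int(np.argmax(W @ x))

def targeted_attack(W, x, target, lr=0.1, steps=500):
    """Craft a class-targeted adversarial example by minimizing the
    cross-entropy loss toward `target` with plain gradient descent.
    For a linear softmax model, d(loss)/dx = W.T @ (p - onehot(target))."""
    x_adv = x.copy()
    onehot = np.zeros(W.shape[0])
    onehot[target] = 1.0
    for _ in range(steps):
        logits = W @ x_adv
        p = np.exp(logits - logits.max())
        p /= p.sum()                      # softmax probabilities
        grad = W.T @ (p - onehot)         # gradient of CE loss w.r.t. x
        x_adv -= lr * grad                # descend toward the target class
    return x_adv

x = rng.normal(size=dim)
target = (predict(W, x) + 1) % n_classes  # any class != current prediction
x_adv = targeted_attack(W, x, target)
assert predict(W, x_adv) == target        # source model now outputs target
```

In the transfer setting the paper studies, `x_adv` would be crafted on one (slightly robust) source classifier and then evaluated on a *different* target classifier; the paper's finding is that slight robustness of the source model makes such targeted examples far more likely to transfer.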

Authors (3)
  1. Jacob M. Springer (5 papers)
  2. Melanie Mitchell (28 papers)
  3. Garrett T. Kenyon (17 papers)
Citations (40)
