
AutoAugment Input Transformation for Highly Transferable Targeted Attacks (2312.14218v1)

Published 21 Dec 2023 in cs.CV

Abstract: Deep Neural Networks (DNNs) are widely acknowledged to be susceptible to adversarial examples, wherein imperceptible perturbations are added to clean examples through diverse input transformation attacks. However, these methods, originally designed for non-targeted attacks, exhibit low success rates in targeted attacks. Recent targeted adversarial attacks mainly pay attention to gradient optimization, attempting to find the suitable perturbation direction. However, few of them are dedicated to input transformation. In this work, we observe a positive correlation between the logit/probability of the target class and diverse input transformation methods in targeted attacks. To this end, we propose a novel targeted adversarial attack called AutoAugment Input Transformation (AAIT). Instead of relying on hand-crafted strategies, AAIT searches for the optimal transformation policy from a transformation space comprising various operations. Then, AAIT crafts adversarial examples using the found optimal transformation policy to boost the adversarial transferability in targeted attacks. Extensive experiments conducted on CIFAR-10 and ImageNet-Compatible datasets demonstrate that the proposed AAIT surpasses other transfer-based targeted attacks significantly.
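The core mechanic the abstract describes, averaging attack gradients over transformed copies of the input while descending a targeted loss, can be sketched in a toy setting. The snippet below is a minimal illustration only, not the paper's implementation: it uses a single hand-picked random-scaling operation as a stand-in for the searched AutoAugment policy, and a linear softmax "model" so the gradient is available in closed form. All names and hyperparameters (`targeted_attack`, `steps`, `eps`, `alpha`, `n_transforms`) are assumptions for illustration.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def onehot(i, n):
    v = np.zeros(n)
    v[i] = 1.0
    return v

def targeted_attack(x, W, target, steps=50, eps=0.5, alpha=0.02,
                    n_transforms=5, seed=0):
    """Iterative targeted attack on a linear softmax model z = W @ x.

    Gradients are averaged over randomly scaled copies of the input,
    a hand-picked stand-in for the transformation policy that AAIT
    would search for automatically.
    """
    rng = np.random.default_rng(seed)
    x_adv = x.copy()
    n_classes = W.shape[0]
    for _ in range(steps):
        grad = np.zeros_like(x)
        for _ in range(n_transforms):
            s = rng.uniform(0.8, 1.2)          # sampled transformation (scaling)
            p = softmax(W @ (s * x_adv))
            # d/dx_adv of -log p[target]: chain rule through z = s * (W @ x_adv)
            grad += s * (W.T @ (p - onehot(target, n_classes)))
        grad /= n_transforms
        x_adv = x_adv - alpha * np.sign(grad)      # descend the targeted CE loss
        x_adv = x + np.clip(x_adv - x, -eps, eps)  # project into the eps-ball
    return x_adv
```

The choice of transformations is exactly the knob the paper tunes: its observation is that the target-class probability correlates with the set of input transformations used, so AAIT searches the policy over a space of operations rather than fixing one by hand as this sketch does.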

