Rethinking Mixup for Improving the Adversarial Transferability (2311.17087v1)

Published 28 Nov 2023 in cs.CV

Abstract: Mixup augmentation has been widely adopted to generate adversarial examples with superior adversarial transferability when transferring from a surrogate model to other models. However, the mechanism by which mixup influences transferability remains unexplored. In this work, we posit that adversarial examples located at the convergence of decision boundaries across various categories exhibit better transferability, and we identify that Admix tends to steer adversarial examples towards such regions. However, we find that the constraint on the added image in Admix weakens this capability, resulting in limited transferability. To address this issue, we propose a new input transformation-based attack called Mixing the Image but Separating the gradienT (MIST). Specifically, MIST randomly mixes the input image with a randomly shifted image and separates the gradient of each loss item for each mixed image. To counteract the imprecise gradient, MIST computes the gradient on several mixed images for each input sample. Extensive experiments on the ImageNet dataset demonstrate that MIST outperforms existing state-of-the-art input transformation-based attacks by a clear margin on both Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs), with and without defense mechanisms, supporting MIST's high effectiveness and generality.
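The abstract's core recipe (mix the input with a randomly shifted image, then average the loss gradient over several mixed copies) can be sketched minimally in NumPy. This is an illustrative reading of the abstract only, not the paper's implementation: the mixing-weight range `lam_range`, the shift bound `max_shift`, the choice to shift a copy of the input itself, and the `grad_fn` callback are all assumptions, and the per-loss-item gradient separation that MIST also performs is not modeled here.

```python
import numpy as np

def mist_mix(x, rng, lam_range=(0.5, 1.0), max_shift=8):
    """Mix an image with a randomly shifted copy of itself.

    x: image array of shape (H, W, C).
    Returns the mixed image and the mixing weight lam.
    (lam_range and max_shift are illustrative assumptions.)
    """
    lam = rng.uniform(*lam_range)                      # random mixing weight
    dy = int(rng.integers(-max_shift, max_shift + 1))  # random vertical shift
    dx = int(rng.integers(-max_shift, max_shift + 1))  # random horizontal shift
    shifted = np.roll(x, shift=(dy, dx), axis=(0, 1))  # circular spatial shift
    return lam * x + (1.0 - lam) * shifted, lam

def averaged_grad(x, grad_fn, rng, n_copies=4):
    """Average the loss gradient over several mixed copies of x.

    grad_fn is a hypothetical callback returning the loss gradient
    w.r.t. its input image; averaging counteracts the noisy gradient
    of any single mixed copy, as described in the abstract.
    """
    grads = [grad_fn(mist_mix(x, rng)[0]) for _ in range(n_copies)]
    return np.mean(grads, axis=0)
```

In an iterative attack such as MI-FGSM, `averaged_grad` would replace the plain gradient of the surrogate model's loss at each step; the rest of the update (momentum accumulation, sign step, clipping) is unchanged.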

Authors (2)
  1. Xiaosen Wang (30 papers)
  2. Zeyuan Yin (7 papers)
Citations (2)
