Boosting the Transferability of Adversarial Examples via Local Mixup and Adaptive Step Size (2401.13205v1)

Published 24 Jan 2024 in cs.CV and cs.AI

Abstract: Adversarial examples are a critical security threat to various visual applications, where injected human-imperceptible perturbations can confuse the output. Generating transferable adversarial examples in the black-box setting is crucial but challenging in practice. Existing input-diversity-based methods adopt different image transformations, but may be inefficient due to insufficient input diversity and an identical perturbation step size. Motivated by the fact that different image regions have distinctive weights in classification, this paper proposes a black-box adversarial generative framework that jointly designs enhanced input diversity and adaptive step sizes. We design local mixup to randomly mix a group of transformed adversarial images, strengthening the input diversity. For precise adversarial generation, we project the perturbation into the $\tanh$ space to relax the boundary constraint. Moreover, the step sizes of different regions can be dynamically adjusted by integrating a second-order momentum. Extensive experiments on ImageNet validate that our framework achieves superior transferability compared to state-of-the-art baselines.
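
The abstract names three components: local mixup over a group of transformed adversarial copies (input diversity), a $\tanh$-space reparameterization of the perturbation (so the update is never hard-clipped at the $L_\infty$ boundary), and per-region step sizes driven by a second-order momentum. The sketch below shows one plausible way these pieces could fit together; the resize transform, patch size, mixing range, and all hyperparameters are illustrative assumptions, not values from the paper, and inputs are assumed to be `(B, 3, H, W)` tensors in `[0, 1]` with `H, W >= patch`.

```python
import torch
import torch.nn.functional as F

def random_resize(x, low=0.9, high=1.1):
    # Simple input-diversity transform (hypothetical stand-in for the
    # paper's transformations): rescale by a random factor, resize back.
    s = torch.empty(1).uniform_(low, high).item()
    h, w = x.shape[-2:]
    x = F.interpolate(x, size=(max(1, int(h * s)), max(1, int(w * s))),
                      mode="bilinear", align_corners=False)
    return F.interpolate(x, size=(h, w), mode="bilinear", align_corners=False)

def local_mixup(images, lam_low=0.5, patch=56):
    # Hypothetical "local mixup": blend one random local patch of each image
    # with the same patch from another transformed copy in the group.
    mixed = images.clone()
    b, _, h, w = images.shape
    perm = torch.randperm(b)
    for i in range(b):
        lam = torch.empty(1).uniform_(lam_low, 1.0).item()
        y0 = torch.randint(0, h - patch + 1, (1,)).item()
        x0 = torch.randint(0, w - patch + 1, (1,)).item()
        mixed[i, :, y0:y0 + patch, x0:x0 + patch] = (
            lam * images[i, :, y0:y0 + patch, x0:x0 + patch]
            + (1 - lam) * images[perm[i], :, y0:y0 + patch, x0:x0 + patch])
    return mixed

def attack(model, x, y, eps=16 / 255, steps=10, n_copies=4,
           beta1=0.9, beta2=0.99, lr=0.1):
    # The perturbation lives in tanh space: x + eps * tanh(w) always stays
    # inside the L-inf ball, relaxing the boundary constraint (no clipping
    # of the update itself is needed).
    w = torch.zeros_like(x, requires_grad=True)
    m = torch.zeros_like(x)  # first-order momentum
    v = torch.zeros_like(x)  # second-order momentum (per-pixel)
    for _ in range(steps):
        x_adv = torch.clamp(x + eps * torch.tanh(w), 0, 1)
        # Enhanced input diversity: several transformed copies, locally mixed.
        copies = local_mixup(torch.cat([random_resize(x_adv)
                                        for _ in range(n_copies)]))
        loss = F.cross_entropy(model(copies), y.repeat(n_copies))
        grad, = torch.autograd.grad(loss, w)
        m = beta1 * m + (1 - beta1) * grad
        v = beta2 * v + (1 - beta2) * grad ** 2
        # Adam-style ascent: the second moment rescales the step per pixel,
        # so different image regions get different effective step sizes.
        w = (w + lr * m / (v.sqrt() + 1e-8)).detach().requires_grad_(True)
    return torch.clamp(x + eps * torch.tanh(w), 0, 1).detach()
```

A call like `attack(model, x, y)` with a pretrained ImageNet classifier and `[0, 1]`-normalized inputs would return the perturbed batch; the paper's exact transformations, mixing scheme, and momentum formulation may differ from this sketch.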
