Boosting Adversarial Transferability by Block Shuffle and Rotation (2308.10299v3)

Published 20 Aug 2023 in cs.CV and eess.IV

Abstract: Adversarial examples mislead deep neural networks with imperceptible perturbations and have brought significant threats to deep learning. An important aspect is their transferability, which refers to their ability to deceive other models, thus enabling attacks in the black-box setting. Though various methods have been proposed to boost transferability, the performance still falls short compared with white-box attacks. In this work, we observe that existing input transformation based attacks, one of the mainstream transfer-based attacks, result in different attention heatmaps on various models, which might limit the transferability. We also find that breaking the intrinsic relation of the image can disrupt the attention heatmap of the original image. Based on this finding, we propose a novel input transformation based attack called block shuffle and rotation (BSR). Specifically, BSR splits the input image into several blocks, then randomly shuffles and rotates these blocks to construct a set of new images for gradient calculation. Empirical evaluations on the ImageNet dataset demonstrate that BSR could achieve significantly better transferability than the existing input transformation based methods under single-model and ensemble-model settings. Combining BSR with the current input transformation method can further improve the transferability, which significantly outperforms the state-of-the-art methods. Code is available at https://github.com/Trustworthy-AI-Group/BSR
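The core BSR transformation described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation (see the linked repository for that): it assumes square blocks, a fixed grid size, and rotations restricted to multiples of 90 degrees, whereas the paper's block counts and rotation angles may differ. The function `block_shuffle_rotate` and its parameters are hypothetical names chosen for this sketch.

```python
import numpy as np

def block_shuffle_rotate(image, n_blocks=2, rng=None):
    """Split an image into an n_blocks x n_blocks grid, randomly shuffle the
    blocks, and rotate each by a random multiple of 90 degrees.

    Assumes square blocks: image height and width are equal and divisible
    by n_blocks (otherwise rotated blocks would not fit back into the grid).
    """
    if rng is None:
        rng = np.random.default_rng()
    h, w = image.shape[:2]
    bh, bw = h // n_blocks, w // n_blocks
    # Collect the blocks in row-major order.
    blocks = [image[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
              for i in range(n_blocks) for j in range(n_blocks)]
    # Random permutation of block positions.
    order = rng.permutation(len(blocks))
    out = np.empty_like(image)
    for dst, src in enumerate(order):
        i, j = divmod(dst, n_blocks)
        # Rotate the source block by a random multiple of 90 degrees.
        rotated = np.rot90(blocks[src], k=int(rng.integers(4)))
        out[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw] = rotated
    return out
```

In the attack, several such transformed copies of the input would be generated per iteration and their gradients averaged, so that the perturbation does not overfit to the attention pattern of any single surrogate model.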

Authors (4)
  1. Kunyu Wang
  2. Xuanran He
  3. Wenxuan Wang
  4. Xiaosen Wang
Citations (18)
