GADT: Enhancing Transferable Adversarial Attacks through Gradient-guided Adversarial Data Transformation

Published 24 Oct 2024 in cs.AI (arXiv:2410.18648v1)

Abstract: Current Transferable Adversarial Examples (TAE) are primarily generated by adding Adversarial Noise (AN). Recent studies emphasize the importance of optimizing Data Augmentation (DA) parameters along with AN, which poses a greater threat to real-world AI applications. However, existing DA-based strategies often struggle to find optimal solutions due to the challenging DA search procedure without proper guidance. In this work, we propose a novel DA-based attack algorithm, GADT. GADT identifies suitable DA parameters through iterative antagonism and uses posterior estimates to update AN based on these parameters. We uniquely employ a differentiable DA operation library to identify adversarial DA parameters and introduce a new loss function as a metric during DA optimization. This loss term enhances adversarial effects while preserving the original image content, maintaining attack crypticity. Extensive experiments on public datasets with various networks demonstrate that GADT can be integrated with existing transferable attack methods, updating their DA parameters effectively while retaining their AN formulation strategies. Furthermore, GADT can be utilized in other black-box attack scenarios, e.g., query-based attacks, offering a new avenue to enhance attacks on real-world AI applications in both research and industrial contexts.
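The abstract does not give the optimization details, but its core idea, descending a joint objective that rewards misclassification while a new loss term penalizes deviation from the original image, with gradients flowing through a differentiable augmentation, can be sketched on a toy linear surrogate. Everything below (the two-parameter transform family, the `lam` weight, the margin objective) is an illustrative assumption, not the paper's implementation:

```python
import numpy as np

# Toy stand-in for a surrogate classifier: fixed linear weights over a
# flattened 16-pixel "image".
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 16))           # 2-class linear model
x = rng.uniform(0.2, 0.8, size=16)     # clean input
y = 0                                  # true label

def transform(x, theta):
    """A differentiable 'augmentation': contrast scale a, brightness shift b."""
    a, b = theta
    return a * x + b

def margin(z, y):
    """True-class margin; the attack tries to drive this down."""
    logits = W @ z
    return logits[y] - logits[1 - y]

lam = 1.0                              # content-preservation weight (assumed)
theta = np.array([1.0, 0.0])           # start at the identity transform
lr = 0.01

g = W[y] - W[1 - y]                    # d(margin)/dz for a linear model
for _ in range(200):
    z = transform(x, theta)
    # Joint loss L = margin(z) + lam * ||z - x||^2, minimized over theta:
    # the first term is adversarial, the second keeps the image recognizable.
    grad_a = g @ x + 2 * lam * ((z - x) @ x)
    grad_b = g.sum() + 2 * lam * (z - x).sum()
    theta -= lr * np.array([grad_a, grad_b])

# After optimization, the transformed input should have a smaller
# true-class margin than the clean input.
print(margin(transform(x, theta), y), "<", margin(x, y))
```

The content penalty keeps `theta` near the identity, so the transformed image stays close to the original; this is the role the abstract's new loss term plays in keeping the attack inconspicuous. The actual method searches a full differentiable DA operation library (Kornia-style transforms) rather than a two-parameter toy, and alternates this DA search with posterior-guided updates of the adversarial noise.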
