
BB-Patch: BlackBox Adversarial Patch-Attack using Zeroth-Order Optimization (2405.06049v1)

Published 9 May 2024 in cs.CV, cs.CR, and cs.LG

Abstract: Deep learning has become popular due to its vast applications in almost all domains. However, models trained using deep learning are prone to failure on adversarial samples and carry considerable risk in sensitive applications. Most adversarial attack strategies assume that the adversary has access to the training data, the model parameters, and the input during deployment, and hence focus on perturbing pixel-level information in the input image. Adversarial patches were introduced to the community and helped expose the vulnerability of deep learning models in a more pragmatic manner, but there the attacker has white-box access to the model parameters. Recently, there have been attempts to develop these adversarial attacks using black-box techniques. However, certain assumptions, such as the availability of large amounts of training data, do not hold in real-life scenarios. In a real-life scenario, the attacker can only guess the model architecture from a select list of state-of-the-art architectures while having access to only a subset of the input dataset. Hence, we propose a black-box adversarial attack strategy that produces adversarial patches which can be applied anywhere in the input image to perform an adversarial attack.
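
The title refers to zeroth-order optimization, which estimates gradients purely from model queries, matching the black-box setting the abstract describes. Below is a minimal sketch of a two-point zeroth-order patch attack under stated assumptions: `query_model` is a hypothetical black-box returning class probabilities, and the names and hyperparameters (`mu`, `lr`, `steps`) are illustrative, not the authors' exact algorithm or settings.

```python
import numpy as np

def zo_patch_attack(query_model, image, patch_shape, location,
                    target_class, mu=0.01, lr=0.1, steps=500):
    """Two-point zeroth-order optimization of an adversarial patch.

    Only model queries are used -- no gradients, parameters, or
    training data. `query_model(batch)` is assumed to return class
    probabilities; every name and hyperparameter here is illustrative.
    """
    h, w, _ = patch_shape
    y0, x0 = location
    patch = np.random.uniform(0.0, 1.0, size=patch_shape)

    def apply_patch(p):
        # Paste the (clipped) patch onto a copy of the clean image.
        out = image.copy()
        out[y0:y0 + h, x0:x0 + w, :] = np.clip(p, 0.0, 1.0)
        return out

    def loss(p):
        # Targeted attack: push up the probability of target_class.
        probs = query_model(apply_patch(p)[None, ...])[0]
        return -np.log(probs[target_class] + 1e-12)

    for _ in range(steps):
        u = np.random.randn(*patch_shape)               # random direction
        # Symmetric finite difference along u estimates the directional
        # derivative; scaling u by it gives a stochastic gradient estimate.
        g = (loss(patch + mu * u) - loss(patch - mu * u)) / (2.0 * mu)
        patch = np.clip(patch - lr * g * u, 0.0, 1.0)   # ZO-SGD step
    return patch
```

Each iteration costs two model queries; in practice, adaptive zeroth-order updates such as ZO-AdaMM can reduce query counts, but the plain ZO-SGD step above keeps the sketch minimal.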

Authors (3)
  1. Satyadwyoom Kumar (3 papers)
  2. Saurabh Gupta (96 papers)
  3. Arun Balaji Buduru (47 papers)
