Properties that allow or prohibit transferability of adversarial attacks among quantized networks (2405.09598v1)

Published 15 May 2024 in cs.LG and cs.AI

Abstract: Deep Neural Networks (DNNs) are known to be vulnerable to adversarial examples. Further, these adversarial examples are found to be transferable from the source network on which they are crafted to a black-box target network. As the trend of using deep learning on embedded devices grows, it becomes relevant to study the transferability properties of adversarial examples among compressed networks. In this paper, we consider quantization as a network compression technique and evaluate the performance of transfer-based attacks when the source and target networks are quantized at different bitwidths. We explore how algorithm-specific properties affect transferability by considering various adversarial example generation algorithms. Furthermore, we examine transferability in a more realistic scenario where the source and target networks may differ in bitwidth and other model-related properties like capacity and architecture. We find that although quantization reduces transferability, certain attack types demonstrate an ability to enhance it. Additionally, the average transferability of adversarial examples among quantized versions of a network can be used to estimate the transferability to quantized target networks with varying capacity and architecture.
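
The core measurement the abstract describes (crafting adversarial examples on a quantized source network and checking how often they also fool a differently quantized, black-box target) can be sketched as follows. This is a minimal illustration, assuming hypothetical PyTorch models `source_model` and `target_model` standing in for networks quantized at different bitwidths, and a `loader` yielding labelled test batches; it uses FGSM as one representative attack, not necessarily the attack suite or quantization scheme used in the paper.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps):
    """Craft adversarial examples on the white-box source model (single FGSM step)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # One signed-gradient step, clipped back to the valid input range [0, 1].
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

def transfer_success_rate(source_model, target_model, loader, eps=8 / 255):
    """Fraction of adversarial examples crafted on the source network that are
    also misclassified by the black-box target (a simple transferability metric)."""
    fooled, total = 0, 0
    for x, y in loader:
        x_adv = fgsm_attack(source_model, x, y, eps)
        with torch.no_grad():
            preds = target_model(x_adv).argmax(dim=1)
        fooled += (preds != y).sum().item()
        total += y.numel()
    return fooled / total
```

Repeating such a measurement over source/target pairs quantized at different bitwidths, and over targets that additionally differ in capacity and architecture, yields the kind of transferability comparisons the abstract describes.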

