Impact of Architectural Modifications on Deep Learning Adversarial Robustness (2405.01934v1)

Published 3 May 2024 in cs.CV, cs.AI, cs.CR, and cs.LG

Abstract: Rapid advancements in deep learning are accelerating its adoption across a wide variety of applications, including safety-critical ones such as self-driving vehicles, drones, robots, and surveillance systems. These advancements include applying variations of sophisticated techniques to improve model performance. However, such models are not immune to adversarial manipulations, which can cause a system to misbehave while remaining unnoticed by experts. The frequency of modifications to existing deep learning models necessitates thorough analysis of their impact on robustness. In this work, we present an experimental evaluation of the effects of model modifications on deep learning robustness using adversarial attacks. Our methodology examines the robustness of model variations against various adversarial attacks. Through these experiments, we aim to shed light on the critical issue of maintaining the reliability and safety of deep learning models in safety- and security-critical applications. Our results indicate a pressing need for in-depth assessment of how model changes affect robustness.
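
The abstract describes evaluating model variants against adversarial attacks. As a minimal sketch of what such an evaluation can look like (not the authors' actual protocol), the snippet below measures robust accuracy under the Fast Gradient Sign Method (Goodfellow et al., 2015) for a generic PyTorch classifier; the `model`, `loader`, and `epsilon` arguments are placeholders to be supplied by the caller.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon):
    """FGSM: perturb each input in the direction of the sign of the
    loss gradient, then clip back to the valid pixel range [0, 1]."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    adv = images + epsilon * images.grad.sign()
    return adv.clamp(0.0, 1.0).detach()

def robust_accuracy(model, loader, epsilon, device="cpu"):
    """Fraction of examples still classified correctly after the attack."""
    model.eval()
    correct, total = 0, 0
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        adv = fgsm_attack(model, images, labels, epsilon)
        with torch.no_grad():
            preds = model(adv).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.size(0)
    return correct / total
```

Comparing this robust accuracy across architectural variants of the same model (e.g., with and without residual connections or squeeze-and-excitation blocks) is one way to quantify how a modification shifts adversarial robustness.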

Authors (5)
  1. Firuz Juraev (3 papers)
  2. Mohammed Abuhamad (14 papers)
  3. Simon S. Woo (42 papers)
  4. Tamer Abuhmed (8 papers)
  5. George K Thiruvathukal (1 paper)

