Adversarial Evasion Attacks Practicality in Networks: Testing the Impact of Dynamic Learning (2306.05494v2)
Abstract: Machine Learning (ML) has become ubiquitous, and its deployment in Network Intrusion Detection Systems (NIDS) is inevitable due to its automated nature and its high accuracy, compared to traditional approaches, in processing and classifying large volumes of data. However, ML models have a well-known weakness: adversarial attacks, which craft inputs that trick a model into producing faulty predictions. While most adversarial attack research has focused on computer vision datasets, recent studies have examined how well these attacks carry over to ML-based network security systems, especially NIDS, because generating adversarial examples differs substantially across domains. To explore the practicality of adversarial attacks against ML-based NIDS in depth, this paper makes three distinct contributions: identifying numerous practicality issues for evasion adversarial attacks on ML-based NIDS using an attack tree threat model, introducing a taxonomy of practicality issues associated with adversarial attacks against ML-based NIDS, and investigating how the dynamicity of some real-world ML models affects adversarial attacks against NIDS. Our experiments indicate that continuous re-training, even without adversarial training, can reduce the effectiveness of adversarial attacks. While adversarial attacks can compromise ML-based NIDS, our aim is to highlight the significant gap between research and real-world practicality in this domain, which warrants attention.
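The finding that plain re-training can blunt evasion attacks can be illustrated with a minimal sketch. Everything below is an illustrative assumption, not the paper's actual setup: a NumPy logistic-regression "NIDS" on synthetic flow features, evasive samples crafted by minimally perturbing malicious flows across the static model's decision boundary, and the same samples re-scored after the model is retrained (without adversarial training) on fresh traffic whose feature distribution has drifted.

```python
import numpy as np

# Toy sketch (illustrative assumptions, not the paper's experiments):
# a linear "NIDS" evaded by minimal-perturbation samples crafted against
# a static model, then re-scored after plain re-training on drifted data.

rng = np.random.default_rng(0)

def make_traffic(n, drift=0.0):
    """Synthetic flows: benign centred at 0, malicious at 2 (plus drift)."""
    benign = rng.normal(0.0 + drift, 1.0, size=(n, 4))
    malicious = rng.normal(2.0 + drift, 1.0, size=(n, 4))
    X = np.vstack([benign, malicious])
    y = np.concatenate([np.zeros(n), np.ones(n)])
    return X, y

def train_logreg(X, y, epochs=300, lr=0.1):
    """Plain gradient-descent logistic regression (label 1 = malicious)."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def evade(X_mal, w, b, margin=0.1):
    """Minimal perturbation: project each malicious flow just past the
    target model's decision boundary (its score becomes -margin)."""
    scores = X_mal @ w + b
    return X_mal - np.outer((scores + margin) / (w @ w), w)

def evasion_rate(X_adv, w, b):
    """Fraction of adversarial flows the model labels benign."""
    return float(np.mean(X_adv @ w + b < 0))

# The attacker crafts evasive flows against a static model...
X0, y0 = make_traffic(500)
w0, b0 = train_logreg(X0, y0)
X_adv = evade(X0[y0 == 1], w0, b0)
static_rate = evasion_rate(X_adv, w0, b0)

# ...but the defender retrains on fresh traffic whose features have
# drifted (direction and magnitude chosen purely for illustration).
X1, y1 = make_traffic(500, drift=-0.5)
w1, b1 = train_logreg(X1, y1)
dynamic_rate = evasion_rate(X_adv, w1, b1)

print(f"evasion rate vs. static model:    {static_rate:.2f}")
print(f"evasion rate vs. retrained model: {dynamic_rate:.2f}")
```

Minimal-margin perturbations are exactly the adversarial samples most fragile to a moving decision boundary, which is why even non-adversarial re-training degrades attacks crafted against a stale model, mirroring the paper's observation about dynamic learning.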
- Mohamed el Shehaby
- Ashraf Matrawy