Explainable and Transferable Adversarial Attack for ML-Based Network Intrusion Detectors (2401.10691v1)

Published 19 Jan 2024 in cs.CR

Abstract: Despite being widely used in network intrusion detection systems (NIDSs), ML has proven to be highly vulnerable to adversarial attacks. White-box and black-box adversarial attacks against NIDSs have been explored in several studies. However, white-box attacks unrealistically assume that the attackers have full knowledge of the target NIDSs. Meanwhile, existing black-box attacks cannot achieve a high attack success rate due to the weak adversarial transferability between models (e.g., neural networks and tree models). Additionally, neither line of work explains why adversarial examples exist and why they can transfer across models. To address these challenges, this paper introduces ETA, an Explainable Transfer-based Black-Box Adversarial Attack framework. ETA aims to achieve two primary objectives: 1) create transferable adversarial examples applicable to various ML models and 2) provide insights into the existence of adversarial examples and their transferability within NIDSs. Specifically, we first provide a general transfer-based adversarial attack method applicable across the entire ML space. Following that, we exploit a unique insight based on cooperative game theory and perturbation interpretations to explain adversarial examples and adversarial transferability. On this basis, we propose an Important-Sensitive Feature Selection (ISFS) method to guide the search for adversarial examples, achieving stronger transferability and ensuring traffic-space constraints.
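The core idea described in the abstract — perturbing only a small set of features that an attribution method flags as important, then transferring the resulting example from a surrogate to the black-box target — can be sketched as follows. This is an illustrative toy, not the paper's actual algorithm: the linear surrogate, the perturbation-based importance score (a crude stand-in for the Shapley-style attribution the paper uses), and the feature budget `k` are all assumptions introduced here.

```python
# Hedged sketch: importance-guided transfer attack on a surrogate model,
# loosely inspired by the ISFS idea. All models and scores are toy assumptions.
import numpy as np

# Toy "traffic-feature" surrogate: a flow is flagged malicious when a
# linear score exceeds zero (stand-in for any trained surrogate model).
w_surrogate = np.array([2.0, -0.5, 1.5, 0.1, -1.0])

def surrogate_score(x):
    return float(x @ w_surrogate)

def perturbation_importance(x, eps=1e-3):
    """Score each feature by how much a small perturbation moves the
    surrogate's output (a simple proxy for game-theoretic attribution)."""
    base = surrogate_score(x)
    n = len(x)
    return np.array([abs(surrogate_score(x + eps * np.eye(n)[i]) - base)
                     for i in range(n)])

def isfs_attack(x, k=2, step=1.0):
    """Perturb only the k most important features, in the direction that
    lowers the surrogate's malicious score (gradient sign for a linear model)."""
    importance = perturbation_importance(x)
    top = np.argsort(importance)[-k:]       # the "important-sensitive" features
    x_adv = x.copy()
    x_adv[top] -= step * np.sign(w_surrogate[top])
    return x_adv

x = np.array([1.0, 0.2, 1.0, 0.5, 0.3])     # initially malicious: score > 0
x_adv = isfs_attack(x, k=2)
print(surrogate_score(x), surrogate_score(x_adv))  # score flips below zero
```

Restricting the perturbation to a few high-importance features is also what makes it plausible to respect traffic-space constraints: only a handful of (mutable) flow features need to change, rather than the whole feature vector. The real framework additionally evaluates the crafted example against a separate black-box target model to measure transferability, which this sketch omits.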

Authors (8)
  1. Hangsheng Zhang
  2. Dongqi Han
  3. Yinlong Liu
  4. Zhiliang Wang
  5. Jiyan Sun
  6. Shangyuan Zhuang
  7. Jiqiang Liu
  8. Jinsong Dong
Citations (1)
