
Eclectic Rule Extraction for Explainability of Deep Neural Network based Intrusion Detection Systems (2401.10207v1)

Published 18 Jan 2024 in cs.CR, cs.AI, and cs.LG

Abstract: This paper addresses trust issues created by the ubiquity of black box algorithms and surrogate explainers in Explainable Intrusion Detection Systems (X-IDS). While Explainable Artificial Intelligence (XAI) aims to enhance transparency, black box surrogate explainers, such as Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP), are difficult to trust. The black box nature of these surrogate explainers makes the process behind explanation generation opaque and difficult to understand. To avoid this problem, one can use transparent white box algorithms such as Rule Extraction (RE). There are three types of RE algorithms: pedagogical, decompositional, and eclectic. Pedagogical methods offer fast but untrustworthy white-box explanations, while decompositional RE provides trustworthy explanations with poor scalability. This work explores eclectic rule extraction, which strikes a balance between scalability and trustworthiness. By combining techniques from pedagogical and decompositional approaches, eclectic rule extraction leverages the advantages of both while mitigating some of their drawbacks. The proposed Hybrid X-IDS architecture features eclectic RE as a white box surrogate explainer for black box Deep Neural Networks (DNN). The presented eclectic RE algorithm extracts human-readable rules from hidden layers, facilitating explainable and trustworthy rulesets. Evaluations on the UNSW-NB15 and CIC-IDS-2017 datasets demonstrate the algorithm's ability to generate rulesets with 99.9% accuracy, mimicking DNN outputs. The contributions of this work include the hybrid X-IDS architecture, the eclectic rule extraction algorithm applicable to intrusion detection datasets, and a thorough analysis of performance and explainability, demonstrating the trade-offs involved in rule extraction speed and accuracy.
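The abstract describes extracting human-readable rules from a DNN's hidden layers so that a transparent ruleset mimics the black box model's outputs. The paper's exact eclectic algorithm is not reproduced here, but the core idea common to this family of methods can be sketched as fitting a transparent surrogate (here, a shallow decision tree) on hidden-layer activations against the DNN's own predicted labels, then reading the tree's branches off as IF-THEN rules. The random activations, the `h0…h3` feature names, and the synthetic labeling function below are all placeholder assumptions standing in for a real trained DNN and dataset:

```python
# Hedged sketch of surrogate-based rule extraction, NOT the paper's algorithm.
# A decision tree is fit to (hidden-layer activations, DNN-predicted labels);
# each root-to-leaf path then yields one human-readable rule. "Fidelity" is
# the surrogate's agreement with the DNN's outputs, the metric rulesets are
# judged by when they are meant to mimic a black box model.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, _tree

rng = np.random.default_rng(0)

# Stand-ins for data the real pipeline would produce:
#   activations - hidden-layer outputs of the trained DNN (here: random)
#   dnn_labels  - the DNN's *predicted* classes (rules mimic the model,
#                 not the ground truth); here a simple synthetic function
activations = rng.normal(size=(500, 4))
dnn_labels = (activations[:, 0] + activations[:, 2] > 0).astype(int)

# Fit a shallow, transparent surrogate on the activations.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(activations, dnn_labels)

def extract_rules(tree, feature_names):
    """Walk the fitted tree and emit one IF-THEN rule per leaf."""
    t = tree.tree_
    rules = []

    def recurse(node, conditions):
        if t.feature[node] == _tree.TREE_UNDEFINED:  # leaf node
            label = int(np.argmax(t.value[node]))
            rules.append(f"IF {' AND '.join(conditions) or 'TRUE'} "
                         f"THEN class={label}")
            return
        name = feature_names[t.feature[node]]
        thr = t.threshold[node]
        recurse(t.children_left[node], conditions + [f"{name} <= {thr:.3f}"])
        recurse(t.children_right[node], conditions + [f"{name} > {thr:.3f}"])

    recurse(0, [])
    return rules

rules = extract_rules(surrogate, [f"h{i}" for i in range(4)])
fidelity = surrogate.score(activations, dnn_labels)
print(f"{len(rules)} rules, fidelity to DNN labels: {fidelity:.3f}")
for r in rules[:3]:
    print(r)
```

A purely pedagogical method would fit the surrogate on raw inputs versus DNN outputs; the eclectic approach's use of hidden-layer activations is what lets rules reference the network's internal representation while still scaling better than full decompositional extraction.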

Authors (7)
  1. Jesse Ables
  2. Nathaniel Childers
  3. William Anderson
  4. Sudip Mittal
  5. Shahram Rahimi
  6. Ioana Banicescu
  7. Maria Seale
