A Survey on Explainable Artificial Intelligence for Cybersecurity (2303.12942v2)

Published 7 Mar 2023 in cs.CR, cs.AI, and cs.NI

Abstract: The black-box nature of AI models has been the source of many concerns in their use for critical applications. Explainable Artificial Intelligence (XAI) is a rapidly growing research field that aims to create machine learning models that can provide clear and interpretable explanations for their decisions and actions. In the field of network cybersecurity, XAI has the potential to revolutionize the way we approach network security by enabling us to better understand the behavior of cyber threats and to design more effective defenses. In this survey, we review the state of the art in XAI for cybersecurity in network systems and explore the various approaches that have been proposed to address this important problem. The review follows a systematic classification of network-driven cybersecurity threats and issues. We discuss the challenges and limitations of current XAI methods in the context of cybersecurity and outline promising directions for future research.
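
As a purely illustrative sketch (not taken from the survey itself), the snippet below shows the kind of post-hoc, feature-attribution explanation the paper reviews: SHAP values for a toy network-intrusion classifier. The feature names, synthetic flow data, and choice of a random-forest model with the shap and scikit-learn libraries are assumptions made only for this example.

```python
# Illustrative sketch: post-hoc explanation of a toy intrusion-detection model.
# All feature names and data here are synthetic assumptions, not from the paper.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
feature_names = ["duration", "bytes_sent", "bytes_received", "packet_rate", "failed_logins"]

# Synthetic flow records; a flow is labelled malicious (1) when both its
# packet rate and its number of failed logins are high.
X = rng.random((500, len(feature_names)))
y = ((X[:, 3] > 0.7) & (X[:, 4] > 0.5)).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# SHAP decomposes one prediction into per-feature contributions, turning the
# black-box decision into something a security analyst can inspect.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])

# Older shap releases return a list (one array per class); newer ones return
# a single array shaped (samples, features, classes). Handle both.
if isinstance(shap_values, list):
    contributions = shap_values[1][0]
else:
    contributions = shap_values[0, :, 1]

for name, value in sorted(zip(feature_names, contributions), key=lambda t: -abs(t[1])):
    print(f"{name:15s} {value:+.3f}")
```

Running the sketch prints the input features ranked by how strongly they push the classifier toward the "malicious" label for a single flow, which is the sense in which XAI methods make such detection decisions auditable.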

Authors (8)
  1. Gaith Rjoub (6 papers)
  2. Jamal Bentahar (23 papers)
  3. Omar Abdel Wahab (8 papers)
  4. Rabeb Mizouni (12 papers)
  5. Alyssa Song (1 paper)
  6. Robin Cohen (16 papers)
  7. Hadi Otrok (23 papers)
  8. Azzam Mourad (20 papers)
Citations (15)