
Creating an Explainable Intrusion Detection System Using Self Organizing Maps (2207.07465v1)

Published 15 Jul 2022 in cs.CR, cs.AI, and cs.LG

Abstract: Modern AI-enabled Intrusion Detection Systems (IDS) are complex black boxes. This means that a security analyst will have little to no explanation or clarification on why an IDS model made a particular prediction. A potential solution to this problem is to research and develop Explainable Intrusion Detection Systems (X-IDS) based on current capabilities in Explainable Artificial Intelligence (XAI). In this paper, we create a Self-Organizing Map (SOM) based X-IDS system that is capable of producing explanatory visualizations. We leverage the SOM's explainability to create both global and local explanations. An analyst can use global explanations to get a general idea of how a particular IDS model computes predictions. Local explanations are generated for individual datapoints to explain why a certain prediction value was computed. Furthermore, our SOM-based X-IDS was evaluated on both explanation generation and traditional accuracy tests using the NSL-KDD and the CIC-IDS-2017 datasets.
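To make the approach concrete, the sketch below shows a minimal Self-Organizing Map trained with the classic best-matching-unit (BMU) update rule, with quantization error used as a simple anomaly score. This is an illustrative toy, not the paper's implementation: the grid size, learning-rate schedule, and use of quantization error for flagging intrusions are assumptions for demonstration only.

```python
import numpy as np

class SimpleSOM:
    """Minimal SOM sketch (illustrative; not the authors' system)."""

    def __init__(self, rows, cols, dim, seed=0):
        rng = np.random.default_rng(seed)
        self.rows, self.cols = rows, cols
        self.weights = rng.random((rows, cols, dim))
        # Grid coordinates, precomputed for neighborhood updates
        self.coords = np.stack(
            np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1
        )

    def bmu(self, x):
        # Best Matching Unit: grid cell whose weight vector is closest to x
        d = np.linalg.norm(self.weights - x, axis=-1)
        return np.unravel_index(np.argmin(d), d.shape)

    def train(self, data, epochs=10, lr0=0.5, sigma0=None):
        # Exponentially decaying learning rate and neighborhood radius
        sigma0 = sigma0 if sigma0 is not None else max(self.rows, self.cols) / 2
        n_steps = epochs * len(data)
        step = 0
        for _ in range(epochs):
            for x in data:
                lr = lr0 * np.exp(-step / n_steps)
                sigma = sigma0 * np.exp(-step / n_steps)
                r, c = self.bmu(x)
                # Gaussian neighborhood centered on the BMU
                grid_dist2 = ((self.coords - np.array([r, c])) ** 2).sum(axis=-1)
                h = np.exp(-grid_dist2 / (2 * sigma**2))[..., None]
                self.weights += lr * h * (x - self.weights)
                step += 1

    def quantization_error(self, x):
        # Distance from x to its BMU's weight vector; a large value
        # suggests x is unlike the (benign) traffic the map was trained on
        r, c = self.bmu(x)
        return float(np.linalg.norm(self.weights[r, c] - x))
```

After training on feature vectors of normal traffic, a record whose quantization error is unusually high maps poorly onto the learned grid and can be surfaced to the analyst; the trained weight grid itself supports the kind of global visualizations (e.g., U-matrix-style distance maps) that the paper leverages for explanations.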

Authors (7)
  1. Jesse Ables (5 papers)
  2. Thomas Kirby (2 papers)
  3. William Anderson (19 papers)
  4. Sudip Mittal (66 papers)
  5. Shahram Rahimi (36 papers)
  6. Ioana Banicescu (8 papers)
  7. Maria Seale (12 papers)
Citations (12)
