GPU-Accelerated Interpretable Generalization for Rapid Cyberattack Detection and Forensics (2507.14222v1)
Abstract: The Interpretable Generalization (IG) mechanism, recently published in IEEE Transactions on Information Forensics and Security, delivers state-of-the-art, evidence-based intrusion detection by discovering coherent normal and attack patterns through exhaustive intersect-and-subset operations; yet its cubic-time complexity and large intermediate bitsets make full-scale datasets impractical on CPUs. We present IG-GPU, a PyTorch re-architecture that offloads all pairwise intersections and subset evaluations to commodity GPUs. Running on a single NVIDIA RTX 4070 Ti, IG-GPU achieves a 116-fold speed-up over the multi-core CPU implementation of IG on a 15k-record subset of NSL-KDD. On the full 148k-record NSL-KDD dataset, even with limited training data (a 10%-90% train-test split), IG-GPU completes in 18 minutes with Recall 0.957, Precision 0.973, and AUC 0.961, whereas IG had to be down-sampled to 15k records to avoid memory exhaustion and obtained Recall 0.935, Precision 0.942, and AUC 0.940. These results confirm that IG-GPU is robust across scales and could provide millisecond-level per-flow inference once patterns are learned. IG-GPU thus bridges the gap between rigorous interpretability and real-time cyber-defense, offering a portable foundation for future work on hardware-aware scheduling, multi-GPU sharding, and dataset-specific sparsity optimizations.
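To illustrate the kind of GPU offloading the abstract describes, the following is a minimal sketch of batched pairwise intersections and subset tests over bit-packed patterns in PyTorch. It is not the authors' implementation: the function names (`pack_bits`, `pairwise_intersections`, `subset_mask`), the byte-level packing layout, and the toy data are assumptions introduced here for illustration only.

```python
# Hypothetical sketch of GPU-side intersect-and-subset operations over
# bit-packed binary patterns, loosely modeled on the IG-GPU description.
# All names and the packing scheme are illustrative assumptions.
import torch

def pack_bits(bool_matrix: torch.Tensor) -> torch.Tensor:
    """Pack a (n_patterns, n_features) boolean matrix into uint8 bytes,
    8 features per byte, so set operations become bitwise ops."""
    n, f = bool_matrix.shape
    pad = (-f) % 8
    bits = torch.nn.functional.pad(bool_matrix.to(torch.uint8), (0, pad))
    bits = bits.view(n, -1, 8)
    weights = torch.tensor([1, 2, 4, 8, 16, 32, 64, 128],
                           dtype=torch.int64, device=bool_matrix.device)
    return (bits.long() * weights).sum(dim=-1).to(torch.uint8)  # (n, n_bytes)

def pairwise_intersections(packed: torch.Tensor) -> torch.Tensor:
    """All pairwise intersections via broadcasting: result[i, j] = p_i AND p_j."""
    return torch.bitwise_and(packed.unsqueeze(1), packed.unsqueeze(0))

def subset_mask(candidates: torch.Tensor, patterns: torch.Tensor) -> torch.Tensor:
    """mask[i, j] is True iff candidate i is a subset of pattern j,
    i.e. (candidate AND pattern) == candidate for every byte."""
    inter = torch.bitwise_and(candidates.unsqueeze(1), patterns.unsqueeze(0))
    return (inter == candidates.unsqueeze(1)).all(dim=-1)

if __name__ == "__main__":
    device = "cuda" if torch.cuda.is_available() else "cpu"
    # Toy binarized feature patterns standing in for network-flow records.
    patterns = torch.rand(1024, 256, device=device) > 0.7
    packed = pack_bits(patterns)
    inter = pairwise_intersections(packed)   # candidate merged patterns
    subs = subset_mask(packed, packed)       # subset evaluations
    print(inter.shape, subs.shape)
```

In this form, the cubic-time pattern discovery reduces to broadcasted bitwise kernels over dense tensors, which is the kind of workload that maps well onto a commodity GPU such as the RTX 4070 Ti mentioned in the abstract.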