Twin Auto-Encoder Model for Learning Separable Representation in Cyberattack Detection (2403.15509v2)

Published 22 Mar 2024 in cs.CR, cs.AI, and cs.LG

Abstract: Representation learning (RL) methods for cyberattack detection must contend with the diversity and sophistication of attack data, which leads to mixed representations of different classes, particularly as the number of classes increases. To address this, the paper proposes a novel deep learning architecture called the Twin Auto-Encoder (TAE). TAE first maps the input data into a latent space and then deterministically shifts data samples of different classes further apart to create separable data representations, referred to as representation targets. TAE's decoder then projects the input data onto these representation targets; after training, the decoder is used to extract data representations. The representation target serves as a novel dynamic codeword, i.e., the vector that represents a specific class: it is updated after each training epoch for every data sample, in contrast to a conventional fixed codeword that does not incorporate information from the input data. We conduct extensive experiments on diverse cybersecurity datasets, including seven IoT botnet datasets, two network IDS datasets, three malware datasets, one cloud DDoS dataset, and ten artificial datasets with increasing numbers of classes. TAE improves accuracy and F-score in attack detection by around 2% over state-of-the-art models, achieving up to 96.1% average accuracy in IoT attack detection. With a model size of approximately 1 MB and an average extraction time of around 2.6×10⁻⁷ seconds per data sample, TAE is also well-suited for cybersecurity applications and potentially for resource-constrained IoT systems.
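
The abstract describes the architecture only at a high level, so the following is a minimal, hypothetical PyTorch sketch of the idea rather than the paper's exact method: an encoder maps inputs to a latent vector, a deterministic class-dependent shift pushes latents of different classes apart to form per-sample representation targets, and the decoder is trained to reproduce those targets, so its output serves as the extracted representation after training. The class name TwinAutoEncoder, the fixed per-class unit-direction shift rule, and parameters such as shift_scale are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TwinAutoEncoder(nn.Module):
    """Sketch of a TAE-style model: encode, shift latents apart by class to
    form per-sample representation targets, train the decoder to hit them."""

    def __init__(self, in_dim, latent_dim, num_classes, shift_scale=3.0):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, latent_dim)
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, latent_dim)
        )
        # One fixed unit direction per class, used to push latents of different
        # classes apart (an illustrative shift rule, not the paper's exact one).
        dirs = torch.randn(num_classes, latent_dim)
        self.register_buffer("class_dirs", dirs / dirs.norm(dim=1, keepdim=True))
        self.shift_scale = shift_scale

    def representation_targets(self, z, labels):
        # The target is the current latent shifted along its class direction,
        # so it is recomputed from the data as training proceeds
        # ("dynamic codeword") rather than being a fixed per-class vector.
        return z + self.shift_scale * self.class_dirs[labels]

    def forward(self, x, labels=None):
        z = self.encoder(x)
        rep = self.decoder(z)          # decoder output = data representation
        if labels is None:
            return rep                 # inference: just extract the representation
        targets = self.representation_targets(z.detach(), labels)
        loss = F.mse_loss(rep, targets)  # pull decoder output onto the separable target
        return rep, loss


# Tiny usage example on random data (shapes only; not a real IDS dataset).
model = TwinAutoEncoder(in_dim=20, latent_dim=8, num_classes=5)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(32, 20)
y = torch.randint(0, 5, (32,))
for _ in range(5):
    rep, loss = model(x, y)
    opt.zero_grad()
    loss.backward()
    opt.step()
features = model(x)  # learned separable representations for a downstream classifier
```

Because the targets are recomputed from each sample's current latent at every step, they behave like the dynamic codewords described in the abstract, in contrast to a fixed codeword that ignores the input data.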
