Introducing Adaptive Continuous Adversarial Training (ACAT) to Enhance ML Robustness
Abstract: Adversarial training enhances the robustness of Machine Learning (ML) models against adversarial attacks. However, obtaining labeled training and adversarial training data in network/cybersecurity domains is challenging and costly. This letter therefore introduces Adaptive Continuous Adversarial Training (ACAT), a method that integrates adversarial samples detected in real-world operation into the model during continuous learning sessions. Experimental results on a SPAM detection dataset demonstrate that ACAT reduces the time required for adversarial sample detection compared to traditional processes. Moreover, the accuracy of the under-attack ML-based SPAM filter increased from 69% to over 88% after just three retraining sessions.
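The retraining loop the abstract describes can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the toy two-cluster "SPAM" data, the simulated attack (shifting the spam cluster toward ham), and the choice of a logistic-regression classifier are all assumptions made for the sake of a runnable example. The core idea it shows is ACAT's loop: adversarial samples detected at inference time are folded back into the training pool, and the model is retrained over successive sessions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy stand-in for SPAM features: two Gaussian clusters (0 = ham, 1 = spam).
# This synthetic data is an assumption for illustration only.
def make_batch(n=200, shift=0.0):
    X0 = rng.normal(0.0, 1.0, (n // 2, 5))            # ham
    X1 = rng.normal(2.0 + shift, 1.0, (n // 2, 5))    # spam (attacker can shift it)
    X = np.vstack([X0, X1])
    y = np.array([0] * (n // 2) + [1] * (n // 2))
    return X, y

# Initial training on clean data.
X_train, y_train = make_batch()
model = LogisticRegression().fit(X_train, y_train)

# Continuous learning sessions: each session, "detected" adversarial samples
# (here simulated by drifting spam toward ham) are appended to the training
# pool and the model is retrained on the augmented set.
for session in range(3):
    X_adv, y_adv = make_batch(shift=-1.0)             # evasion attempt
    acc_before = model.score(X_adv, y_adv)            # accuracy under attack
    X_train = np.vstack([X_train, X_adv])             # fold detected samples back in
    y_train = np.concatenate([y_train, y_adv])
    model = LogisticRegression().fit(X_train, y_train)
    acc_after = model.score(X_adv, y_adv)             # accuracy after retraining
```

In a real deployment the `make_batch(shift=...)` step would be replaced by an adversarial-sample detector flagging suspicious inputs, which is the part of the pipeline the letter's timing results concern.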