Introducing Adaptive Continuous Adversarial Training (ACAT) to Enhance ML Robustness

Published 15 Mar 2024 in cs.LG, cs.CR, and cs.NI (arXiv:2403.10461v2)

Abstract: Adversarial training enhances the robustness of Machine Learning (ML) models against adversarial attacks. However, obtaining labeled training and adversarial training data in network/cybersecurity domains is challenging and costly. Therefore, this letter introduces Adaptive Continuous Adversarial Training (ACAT), a method that integrates adversarial training samples into the model during continuous learning sessions using real-world detected adversarial data. Experimental results with a SPAM detection dataset demonstrate that ACAT reduces the time required for adversarial sample detection compared to traditional processes. Moreover, the accuracy of the under-attack ML-based SPAM filter increased from 69% to over 88% after just three retraining sessions.
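The buffer-then-retrain loop the abstract describes can be sketched as follows. This is a toy illustration under stated assumptions, not the paper's implementation: the perceptron model, bag-of-words features, fixed vocabulary, and oracle labels (standing in for ACAT's adversarial-sample detection step) are all stand-ins chosen to keep the example self-contained.

```python
# Toy sketch of the ACAT loop: adversarial samples detected in live
# traffic are buffered and folded back into the model at the next
# continuous-learning (retraining) session.

def featurize(text, vocab):
    """Bag-of-words counts over a fixed toy vocabulary."""
    words = text.lower().split()
    return [words.count(w) for w in vocab]

class Perceptron:
    """Minimal linear spam classifier (1 = spam, 0 = ham)."""
    def __init__(self, dim):
        self.w = [0.0] * dim
        self.b = 0.0

    def predict(self, x):
        score = sum(wi * xi for wi, xi in zip(self.w, x)) + self.b
        return 1 if score > 0 else 0

    def train(self, data, epochs=10):
        for _ in range(epochs):
            for x, y in data:
                err = y - self.predict(x)
                if err:
                    self.w = [wi + err * xi for wi, xi in zip(self.w, x)]
                    self.b += err

VOCAB = ["free", "winner", "money", "meeting", "report", "lunch"]
labelled = [("free money winner", 1), ("meeting report lunch", 0),
            ("winner free free", 1), ("lunch meeting", 0)]

model = Perceptron(len(VOCAB))
model.train([(featurize(t, VOCAB), y) for t, y in labelled])

# Live traffic: spam padded with benign-looking words evades the filter.
# The true_label oracle below stands in for ACAT's detection step.
adversarial_buffer = []
stream = [("money meeting lunch report", 1), ("free meeting report lunch", 1)]
for text, true_label in stream:
    x = featurize(text, VOCAB)
    if model.predict(x) != true_label:          # evasion detected
        adversarial_buffer.append((x, true_label))

# Continuous-learning session: retrain on the original labelled data
# plus the detected adversarial samples, then clear the buffer.
session = [(featurize(t, VOCAB), y) for t, y in labelled] + adversarial_buffer
model.train(session)
adversarial_buffer.clear()

print(model.predict(featurize("money meeting lunch report", VOCAB)))  # → 1
```

After one such session the toy filter catches the padded spam it previously missed while still passing benign mail; the abstract reports the analogous effect at scale, with accuracy recovering from 69% to over 88% after three retraining sessions.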

