
CARE: Ensemble Adversarial Robustness Evaluation Against Adaptive Attackers for Security Applications (2401.11126v1)

Published 20 Jan 2024 in cs.CR and cs.LG

Abstract: Ensemble defenses are widely employed in various security-related applications to enhance model performance and robustness. The widespread adoption of these techniques also raises many questions: Are general ensemble defenses guaranteed to be more robust than individual models? Will stronger adaptive attacks defeat existing ensemble defense strategies as the cybersecurity arms race progresses? Can ensemble defenses achieve adversarial robustness against different types of attacks simultaneously and resist continually adjusted adaptive attacks? Unfortunately, these critical questions remain unresolved, as there is no platform for comprehensive evaluation of ensemble adversarial attacks and defenses in the cybersecurity domain. In this paper, we propose a general Cybersecurity Adversarial Robustness Evaluation (CARE) platform aiming to bridge this gap.
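To make the first question concrete, here is a minimal, self-contained sketch of the kind of experiment such a platform runs. This is not CARE's actual code; the detectors, weight vectors, samples, and the perturbation budget `eps` are all toy assumptions. It crafts an FGSM-style evasion against one linear detector and checks whether the perturbed samples transfer to a majority-vote ensemble of diverse members.

```python
# Hypothetical toy evaluation: individual detector vs. majority-vote ensemble
# under an evasion attack crafted against a single ensemble member.

def predict(w, x):
    """Linear detector: 1 (malicious) if w.x > 0, else 0 (benign)."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0

def fgsm_perturb(w, x, eps):
    """FGSM-style evasion step: move each feature against the sign of the
    attacked model's weight, pushing a malicious sample toward benign."""
    return [xi - eps * (1 if wi > 0 else -1) for wi, xi in zip(w, x)]

def ensemble_predict(models, x):
    """Unweighted majority vote over the member detectors."""
    votes = sum(predict(w, x) for w in models)
    return 1 if votes > len(models) / 2 else 0

# Three detectors with deliberately diverse (toy) weight vectors.
models = [[1.0, -0.2, 0.1], [0.1, 1.0, 0.2], [0.2, 0.1, 1.0]]
malicious = [[1.0, 1.0, 1.0], [0.8, 1.2, 0.9]]  # toy malicious samples

# Craft evasions against model 0 only, then measure transfer.
adv = [fgsm_perturb(models[0], x, eps=1.0) for x in malicious]
detected_solo = sum(predict(models[0], x) for x in adv)       # 0: fully evaded
detected_ens = sum(ensemble_predict(models, x) for x in adv)  # 2: vote holds
```

In this toy setup the attacked member misses both evasions while the majority vote still catches them, but an adaptive attacker optimizing against all members jointly (the paper's second question) would not be stopped this easily.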

