Impacts and Risk of Generative AI Technology on Cyber Defense (2306.13033v1)

Published 22 Jun 2023 in cs.CR and cs.AI

Abstract: Generative Artificial Intelligence (GenAI) has emerged as a powerful technology capable of autonomously producing highly realistic content in various domains, such as text, images, audio, and video. With its potential for positive applications in creative arts, content generation, virtual assistants, and data synthesis, GenAI has garnered significant attention and adoption. However, its increasing adoption raises concerns about misuse for crafting convincing phishing emails, generating disinformation through deepfake videos, and spreading misinformation via authentic-looking social media posts, posing a new set of challenges and risks in the realm of cybersecurity. To combat the threats posed by GenAI, we propose leveraging the Cyber Kill Chain (CKC), a foundational model for cyber defense, to understand the lifecycle of cyberattacks. This paper provides a comprehensive analysis of the risk areas introduced by the offensive use of GenAI techniques in each phase of the CKC framework. We also analyze the strategies employed by threat actors, examine how they are applied across the phases of the CKC, and highlight the implications for cyber defense. Additionally, we propose GenAI-enabled defense strategies that are both attack-aware and adaptive, encompassing techniques such as detection, deception, and adversarial training to mitigate the risks posed by GenAI-induced cyber threats.
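
As a concrete illustration of the "detection" class of defenses mentioned in the abstract, the sketch below trains a toy classifier to flag suspected machine-generated phishing email text. It is a minimal baseline, not the paper's implementation: the TF-IDF character n-gram features, the logistic-regression model, and the four-message corpus are illustrative assumptions made here; a real deployment would fine-tune on large labeled corpora and combine detection with the deception and adversarial-training strategies the paper maps onto CKC phases.

```python
# Minimal sketch (illustrative only): a baseline detector for suspected
# GenAI-crafted phishing text. Features, model, and corpus are assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled corpus: 1 = suspected GenAI-crafted phishing, 0 = benign.
emails = [
    "Dear valued customer, your account will be suspended unless you verify now.",
    "Hi team, attaching the meeting notes from yesterday's standup.",
    "Congratulations! You have been selected. Click the secure link to claim funds.",
    "Reminder: the quarterly report draft is due Friday.",
]
labels = [1, 0, 1, 0]

# Character n-grams are a lightweight stylistic signal; heavier detectors
# would use fine-tuned transformer classifiers instead.
detector = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
detector.fit(emails, labels)

# Score an unseen message; output is a toy prediction, not a real verdict.
print(detector.predict(["Urgent: confirm your credentials to avoid account closure."]))
```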

Authors (4)
  1. Subash Neupane (17 papers)
  2. Ivan A. Fernandez (8 papers)
  3. Sudip Mittal (66 papers)
  4. Shahram Rahimi (36 papers)
Citations (11)