A Comprehensive Overview of Large Language Models (LLMs) for Cyber Defences: Opportunities and Directions (2405.14487v1)

Published 23 May 2024 in cs.CR

Abstract: Recent progress in LLMs has brought great success to data-centric applications. LLMs trained on massive textual datasets have shown the ability not only to encode context but also to provide powerful comprehension for downstream tasks. Notably, Generative Pre-trained Transformers have leveraged this ability to bring AI a step closer to matching human performance, at least in data-centric applications. This power can be harnessed to identify anomalous cyber threats, enhance incident response, and automate routine security operations. We provide an overview of recent LLM activity in cyber defence and categorise the field into sections such as threat intelligence, vulnerability assessment, network security, privacy preservation, awareness and training, automation, and ethical guidelines. Fundamental concepts in the progression from Transformers to pre-trained Transformers and GPT are presented. The recent works in each section are then surveyed, together with their strengths and weaknesses. A dedicated section discusses the challenges and directions of LLMs in cyber security, and, finally, possible future research directions for benefiting from LLMs in cyber security are outlined.
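To make the abstract's claim concrete, the sketch below shows one way an LLM could be prompted to triage a suspicious email for phishing indicators, one of the cyber-defence tasks the survey covers. All names here (`build_triage_prompt`, `stub_classify`, the prompt wording) are hypothetical illustrations, not from the paper, and the model call is stubbed with a naive keyword check; a real system would send the prompt to an LLM API.

```python
# Hypothetical sketch: prompting an LLM to triage an email for phishing.
# The classifier below is a stand-in stub; a real deployment would replace
# stub_classify with a call to an actual LLM.

PROMPT_TEMPLATE = """You are a security analyst. Classify the email below as
PHISHING or BENIGN and list any indicators (urgency cues, spoofed domains,
credential requests).

Email:
{email}

Answer with one word (PHISHING or BENIGN) on the first line, then the indicators."""


def build_triage_prompt(email_text: str) -> str:
    """Fill the analyst prompt template with the raw email body."""
    return PROMPT_TEMPLATE.format(email=email_text)


def stub_classify(prompt: str) -> str:
    """Placeholder for an LLM call; flags a few naive keyword indicators only."""
    indicators = [kw for kw in ("verify your account", "urgent", "password")
                  if kw in prompt.lower()]
    label = "PHISHING" if indicators else "BENIGN"
    return label + "\n" + ", ".join(indicators)


if __name__ == "__main__":
    email = "URGENT: verify your account password at http://examp1e.com"
    print(stub_classify(build_triage_prompt(email)))
```

The stub illustrates only the prompt-construction and response-parsing shape of such a pipeline; the detection quality the survey discusses comes entirely from the underlying model, not from keyword matching.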

Authors (2)
  1. Mohammed Hassanin (9 papers)
  2. Nour Moustafa (23 papers)
Citations (13)