AutoAttacker: A Large Language Model Guided System to Implement Automatic Cyber-attacks (2403.01038v1)
Abstract: LLMs have demonstrated impressive results on natural language tasks, and security researchers are beginning to employ them in both offensive and defensive systems. In cyber-security, multiple research efforts have applied LLMs to the pre-breach stage of attacks, such as phishing and malware generation. However, to date there has been no comprehensive study of whether LLM-based systems can simulate the post-breach stage of attacks, which is typically human-operated ("hands-on-keyboard" attacks), across a variety of attack techniques and environments. As LLMs inevitably advance, they may become able to automate both the pre- and post-breach attack stages. This shift could transform organizational attacks from rare, expert-led events into frequent, automated operations that require no expertise and execute at machine speed and scale. Such a change risks fundamentally altering global computer security, with correspondingly substantial economic impacts, and a goal of this work is to understand these risks now so we can better prepare for the ever-more-capable LLMs on the horizon. In terms of immediate impact, this research serves three purposes. First, an automated, LLM-based post-breach exploitation framework can help analysts quickly test and continually improve their organization's network security posture against previously unseen attacks. Second, an LLM-based penetration-testing system can extend the effectiveness of red teams that have a limited number of human analysts. Finally, this research can help defensive systems and teams learn to detect novel attack behaviors preemptively, before they are used in the wild....
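The abstract describes AutoAttacker only at a high level. As a purely illustrative aid, the sketch below shows the general shape of an LLM-in-the-loop agent for authorized red-team exercises in an isolated lab VM: the model proposes a command, a harness executes it in the sandbox, and the observation is fed back for the next step. All names here (`query_llm`, `run_in_sandbox`, `agent_loop`) are hypothetical and are not taken from the paper; the LLM call is stubbed out so the sketch runs without a model or API key. The paper's actual architecture is defined in the full text, not here.

```python
# Illustrative sketch only -- not the paper's implementation.
# Intended strictly for authorized testing inside an isolated lab VM.
import subprocess

def query_llm(task: str, history: list[str]) -> str:
    """Placeholder for a chat-model call. A real agent would send `task`
    and `history` to an LLM and parse the next shell command from its
    reply, or the sentinel 'DONE' when the task is complete."""
    return "DONE"  # stub so the sketch runs without a model or API key

def run_in_sandbox(command: str) -> str:
    """Run a command and capture its output. Here we shell out locally;
    a real harness would target an isolated, snapshot-restorable VM."""
    result = subprocess.run(command, shell=True, capture_output=True,
                            text=True, timeout=60)
    return result.stdout + result.stderr

def agent_loop(task: str, max_steps: int = 10) -> list[str]:
    """Iterate: ask the model for a command, execute it, and feed the
    observation back so the model can plan the next step."""
    history: list[str] = []
    for _ in range(max_steps):
        command = query_llm(task, history)
        if command.strip() == "DONE":
            break
        observation = run_in_sandbox(command)
        history.append(f"$ {command}\n{observation}")
    return history

if __name__ == "__main__":
    # Example task for an authorized lab exercise, not a live target.
    print(agent_loop("enumerate local users on the lab VM"))
```

The `max_steps` cap and the sandboxed executor reflect two design points the abstract's use cases imply: the loop must terminate even when the model stalls, and every action must be confined to the test environment.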
Authors: Jiacen Xu, Jack W. Stokes, Geoff McDonald, Xuesong Bai, David Marshall, Siyue Wang, Adith Swaminathan, Zhou Li