Asset-centric Threat Modeling for AI-based Systems (2403.06512v2)
Abstract: Threat modeling is a popular method for developing systems securely by building awareness of the potential damage adversaries could cause. However, threat modeling for systems that rely on Artificial Intelligence remains underexplored. Conventional threat modeling methods and tools do not address AI-related threats, and research at this intersection still lacks solutions that guide and automate the process, as well as evidence that the methods hold up in practice. Consequently, this paper presents ThreatFinderAI, an approach and tool that provides guidance and automation to model AI-related assets, threats, and countermeasures, and to quantify residual risks. To evaluate the practicality of the approach, participants were tasked with recreating a threat model, developed by cybersecurity experts, of an AI-based healthcare platform. In addition, the approach was applied in a case study to identify and discuss strategic risks in an LLM-based application. Overall, the solution's usability was well perceived, and it effectively supports threat identification and risk discussion.
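To make the asset-centric idea concrete, the following is a minimal sketch of how assets, threats, countermeasures, and residual risk could be related in code. All names and the risk formula (risk = likelihood × impact, reduced multiplicatively by each countermeasure's effectiveness) are illustrative assumptions, not ThreatFinderAI's actual data model or API.

```python
from dataclasses import dataclass, field

@dataclass
class Countermeasure:
    name: str
    effectiveness: float  # fraction of risk mitigated, in [0, 1]

@dataclass
class Threat:
    name: str
    likelihood: float     # probability estimate in [0, 1]
    impact: float         # e.g. estimated monetary loss
    countermeasures: list = field(default_factory=list)

    def residual_risk(self) -> float:
        # Base risk reduced by each countermeasure's effectiveness
        risk = self.likelihood * self.impact
        for cm in self.countermeasures:
            risk *= (1.0 - cm.effectiveness)
        return risk

@dataclass
class Asset:
    name: str
    threats: list = field(default_factory=list)

    def residual_risk(self) -> float:
        # Residual risk of an asset: sum over its threats
        return sum(t.residual_risk() for t in self.threats)

# Hypothetical example: a training dataset threatened by data poisoning
dataset = Asset("training data", threats=[
    Threat("data poisoning", likelihood=0.3, impact=100_000.0,
           countermeasures=[Countermeasure("input validation", 0.5)]),
])
print(dataset.residual_risk())  # 0.3 * 100000 * (1 - 0.5) = 15000.0
```

Quantifying residual risk per asset in this way supports exactly the kind of risk discussion the evaluation targets: stakeholders can compare assets by residual exposure and see how adding a countermeasure changes the number.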
Authors: Jan von der Assen, Jamo Sharif, Chao Feng, Gérôme Bovet, Burkhard Stiller, Christian Killer