Concept-Guided LLM Agents for Human-AI Safety Codesign (2404.15317v1)
Abstract: Generative AI is increasingly important in software engineering, including safety engineering, where it is used to ensure that software does not cause harm to people. This places high quality requirements on generative AI itself: the simplistic use of LLMs alone will not meet these demands. More advanced and sophisticated approaches are needed to effectively address the complexities and safety concerns of software systems. Ultimately, humans must understand and take responsibility for the suggestions provided by generative AI in order to ensure system safety. To this end, we present an efficient, hybrid strategy for leveraging LLMs in safety analysis and human-AI codesign. In particular, we develop a customized LLM agent that combines prompt engineering, heuristic reasoning, and retrieval-augmented generation to solve tasks associated with predefined safety concepts, in interaction with a system model graph. The reasoning is guided by a cascade of micro-decisions that helps preserve structured information. We further suggest a graph verbalization that acts as an intermediate representation of the system model and facilitates LLM-graph interactions. Selected prompt-response pairs relevant to safety analytics illustrate our method on the use case of a simplified automated driving system.
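The graph verbalization idea above can be sketched as a small transformation from a structured system-model graph into plain sentences that fit into an LLM prompt. The following is a minimal illustration, not the paper's implementation; the graph schema (component nodes with type labels, directed labeled edges) and all component names are hypothetical examples.

```python
# Minimal sketch of graph verbalization: render a directed system-model
# graph as natural-language sentences for use in an LLM prompt.
from typing import Dict, List, Tuple

def verbalize_graph(
    nodes: Dict[str, str],               # node id -> component type
    edges: List[Tuple[str, str, str]],   # (source, edge label, target)
) -> str:
    """Turn nodes and labeled edges into one sentence per element."""
    lines = [f"{nid} is a {ntype}." for nid, ntype in nodes.items()]
    lines += [f"{src} {label} {dst}." for src, label, dst in edges]
    return "\n".join(lines)

# Toy automated-driving example (names are illustrative only).
nodes = {
    "Camera": "sensor",
    "ObjectDetector": "perception component",
    "Planner": "planning component",
}
edges = [
    ("Camera", "sends images to", "ObjectDetector"),
    ("ObjectDetector", "sends object lists to", "Planner"),
]

print(verbalize_graph(nodes, edges))
```

The resulting text block can then be prepended to a safety-analysis question, so the LLM reasons over the verbalized structure rather than a raw graph serialization.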
- Florian Geissler
- Karsten Roscher
- Mario Trapp