Root Cause Analysis of Radiation Oncology Incidents Using Large Language Models (2508.17201v1)
Abstract:
Purpose: To evaluate the reasoning capabilities of LLMs in performing root cause analysis (RCA) of radiation oncology incidents using narrative reports from the Radiation Oncology Incident Learning System (RO-ILS), and to assess their potential utility in supporting patient safety efforts.
Methods and Materials: Four LLMs, Gemini 2.5 Pro, GPT-4o, o3, and Grok 3, were prompted with the 'Background and Incident Overview' sections of 19 public RO-ILS cases. Using a standardized prompt based on AAPM RCA guidelines, each model was instructed to identify root causes, lessons learned, and suggested actions. Outputs were assessed using semantic similarity metrics (cosine similarity via Sentence Transformers), semi-subjective evaluations (precision, recall, F1-score, accuracy, hallucination rate, and four performance criteria: relevance, comprehensiveness, justification, and solution quality), and subjective expert ratings (reasoning quality and overall performance) from five board-certified medical physicists.
Results: LLMs showed promising performance. GPT-4o had the highest cosine similarity (0.831), while Gemini 2.5 Pro had the highest recall (0.762) and accuracy (0.882). Hallucination rates ranged from 11% to 51%. Gemini 2.5 Pro outperformed the other models across the performance criteria and received the highest expert rating (4.8/5). Statistically significant differences in accuracy, hallucination rate, and subjective scores were observed (p < 0.05).
Conclusion: LLMs show emerging promise as tools for RCA in radiation oncology. They can generate relevant, accurate analyses aligned with expert judgment and may support incident analysis and quality improvement efforts to enhance patient safety in clinical practice.
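To illustrate the semantic similarity metric described in the Methods, the sketch below computes cosine similarity between an LLM-generated RCA narrative and an expert-written reference using Sentence Transformers. This is a minimal illustration, not the authors' evaluation code: the embedding model name and the example texts are assumptions.

```python
# Minimal sketch (not the authors' pipeline): compare an LLM-generated RCA
# narrative to an expert reference via cosine similarity of sentence embeddings.
from sentence_transformers import SentenceTransformer, util

# Assumed embedding model; the paper does not specify which Sentence
# Transformers checkpoint was used.
model = SentenceTransformer("all-MiniLM-L6-v2")

# Hypothetical example texts standing in for an expert RCA and an LLM output.
expert_rca = "Root cause: the treatment plan transfer was not independently verified before delivery."
llm_rca = "The incident stemmed from an unverified plan transfer prior to treatment delivery."

# Encode both narratives and compute cosine similarity in embedding space.
emb_expert = model.encode(expert_rca, convert_to_tensor=True)
emb_llm = model.encode(llm_rca, convert_to_tensor=True)
similarity = util.cos_sim(emb_expert, emb_llm).item()

# Scores range from -1 to 1; values closer to 1 indicate closer semantic agreement.
print(f"Cosine similarity: {similarity:.3f}")
```

In practice, the per-case similarity scores would be averaged over the 19 RO-ILS cases to yield a model-level score such as the 0.831 reported for GPT-4o.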