
Admissible false-positive rate for AI-assisted PRA model review

Determine the admissible false-positive rate for error detection when using a specifically trained generative AI to review Probabilistic Risk Assessment (PRA) models, such that the advantages of AI assistance outweigh the cost of investigating spurious error flags.


Background

While AI tools may help reviewers identify inconsistencies in PRA models, excessive false positives impose additional review burden and undermine confidence in the process. The paper highlights the need to set acceptable false-positive thresholds to enable practical and reliable adoption.

This unresolved question is central to quantifying trade-offs in AI-supported PRA workflows, including reviewer effort, model quality assurance, and compliance with regulatory expectations.
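The paper does not propose a specific cost model for this trade-off. As a minimal sketch, one way to frame the question is a break-even calculation: AI assistance pays off as long as the expected value of the real errors it surfaces exceeds the reviewer effort spent dismissing spurious flags. All parameter names and figures below are hypothetical illustrations, not values from the paper.

```python
def breakeven_spurious_flags(n_true_errors: int,
                             detection_rate: float,
                             hours_saved_per_caught_error: float,
                             hours_per_spurious_flag: float) -> float:
    """Largest number of spurious flags for which AI-assisted review
    still yields a net benefit, under a simple linear cost model.

    All inputs are assumptions to be estimated per organization:
      n_true_errors            -- real errors present in the PRA model
      detection_rate           -- fraction of real errors the AI flags
      hours_saved_per_caught_error -- reviewer effort avoided per real error found
      hours_per_spurious_flag  -- reviewer effort to investigate and dismiss a false flag
    """
    expected_benefit = n_true_errors * detection_rate * hours_saved_per_caught_error
    return expected_benefit / hours_per_spurious_flag


# Hypothetical example: 20 real errors in a fault tree, 70% detection rate,
# 8 hours saved per caught error, 0.5 hours to dismiss each spurious flag.
if __name__ == "__main__":
    limit = breakeven_spurious_flags(20, 0.70, 8.0, 0.5)
    print(f"AI review remains worthwhile up to ~{limit:.0f} spurious flags")
```

In practice the admissible false-positive rate would also need to account for non-linear effects the paper alludes to, such as reviewer fatigue and loss of trust after repeated spurious flags, which a linear model like this does not capture.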

References

The following questions remain to be answered: what is the minimum error detection rate needed to leverage the advantages of the technology? What is the admissible false-positive error detection rate? How do we judge a model in which all AI-detected errors were fixed?

Impact of Generative AI (Large Language Models) on the PRA model construction and maintenance, observations (2406.01133 - Rychkov et al., 3 Jun 2024) in Observation 3, Section 3 (A generative AI use case for a fault tree review)