Black-Box Safety Validation of Autonomous Systems: A Multi-Fidelity Reinforcement Learning Approach

Published 7 Mar 2022 in eess.SY and cs.SY (arXiv:2203.03451v3)

Abstract: The increasing use of autonomous and semi-autonomous agents in society makes it crucial to validate their safety. However, the complexity of the scenarios in which these agents operate can make formal verification infeasible, so simulation-based safety validation is used to test the system instead. Recent approaches based on reinforcement learning are prone to excessive exploitation of known failures and to poor coverage of the failure space. To address this limitation, a variant of the Markov decision process called the "knowledge MDP" is defined. It accounts for both the learned model and its metadata, such as sample counts, estimating the system's knowledge through the "knows what it knows" (KWIK) framework. A novel algorithm that extends bidirectional learning to multiple fidelities of simulators is developed to solve the safety validation problem. The effectiveness of this approach is demonstrated through a case study in which an adversary is trained to intercept a test model in a grid-world environment. Monte Carlo trials compare the sample efficiency of the proposed algorithm with that of learning on a single-fidelity simulator, and show the importance of incorporating knowledge about learned models into the decision-making process.
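
For intuition, here is a minimal Python sketch of the abstract's core idea: keep sample counts as model metadata and use a KWIK-style known/unknown test to decide which simulator fidelity to query. Everything here is an illustrative assumption (the threshold `KNOWN_THRESHOLD`, the helper names, the toy 5x5 grid dynamics); it is not the paper's actual algorithm, which extends bidirectional learning across fidelities.

```python
import random
from collections import defaultdict

KNOWN_THRESHOLD = 5  # assumed: samples needed before a pair counts as "known"

# Metadata the knowledge MDP conditions on: visit counts per (state, action, fidelity)
sample_counts = defaultdict(int)

def is_known(state, action, fidelity):
    """KWIK-style test: a (state, action) pair is 'known' at a fidelity
    once it has been sampled there at least KNOWN_THRESHOLD times."""
    return sample_counts[(state, action, fidelity)] >= KNOWN_THRESHOLD

def choose_fidelity(state, action):
    """Query the cheap simulator until the pair is known there,
    then escalate to the expensive high-fidelity one."""
    return "low" if not is_known(state, action, "low") else "high"

def step(state, action, simulators):
    """Take one step at the chosen fidelity and record the sample
    in the metadata used to estimate what the learner knows."""
    fidelity = choose_fidelity(state, action)
    sample_counts[(state, action, fidelity)] += 1
    return simulators[fidelity](state, action), fidelity

# Toy grid-world simulators: low fidelity is deterministic, high fidelity
# perturbs the move (purely illustrative dynamics on a 5x5 torus).
simulators = {
    "low":  lambda s, a: ((s[0] + a[0]) % 5, (s[1] + a[1]) % 5),
    "high": lambda s, a: ((s[0] + a[0] + random.choice((-1, 0, 1))) % 5,
                          (s[1] + a[1]) % 5),
}

state, action = (0, 0), (1, 0)
for _ in range(8):
    state, fidelity = step(state, action, simulators)
    print(state, fidelity)
```

Running the loop shows the escalation pattern: the first five queries go to the cheap simulator, after which the pair is "known" at low fidelity and subsequent queries move to the high-fidelity simulator, which is the mechanism by which a multi-fidelity learner saves expensive samples.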
