Black-Box Safety Validation of Autonomous Systems: A Multi-Fidelity Reinforcement Learning Approach (2203.03451v3)

Published 7 Mar 2022 in eess.SY and cs.SY

Abstract: The increasing use of autonomous and semi-autonomous agents in society has made it crucial to validate their safety. However, the complex scenarios in which they are used may make formal verification impossible. To address this challenge, simulation-based safety validation is employed to test the complex system. Recent approaches using reinforcement learning are prone to excessive exploitation of known failures and a lack of coverage in the space of failures. To address this limitation, a type of Markov decision process called the "knowledge MDP" has been defined. This approach takes into account both the learned model and its metadata, such as sample counts, in estimating the system's knowledge through the "knows what it knows" framework. A novel algorithm that extends bidirectional learning to multiple fidelities of simulators has been developed to solve the safety validation problem. The effectiveness of this approach is demonstrated through a case study in which an adversary is trained to intercept a test model in a grid-world environment. Monte Carlo trials compare the sample efficiency of the proposed algorithm to learning with a single-fidelity simulator and show the importance of incorporating knowledge about learned models into the decision-making process.
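
To make the abstract's idea concrete, the following is a minimal, hypothetical sketch (not the paper's algorithm) of a KWIK-style fidelity-selection rule: the learner keeps sample counts as metadata about its learned model and escalates to an expensive high-fidelity simulator only for state-action pairs it does not yet "know". The class and parameter names (MultiFidelityLearner, known_threshold, low_sim, high_sim) are illustrative assumptions, not identifiers from the paper.

```python
# Hedged sketch of a KWIK-style multi-fidelity query rule.
# Not the paper's algorithm; names and thresholds are assumptions.

from collections import defaultdict


class MultiFidelityLearner:
    def __init__(self, low_sim, high_sim, known_threshold=10):
        self.low_sim = low_sim            # cheap, approximate simulator
        self.high_sim = high_sim          # expensive, accurate simulator
        self.known_threshold = known_threshold
        self.counts = defaultdict(int)    # metadata: sample counts per (state, action)

    def knows(self, state, action):
        """KWIK-style test: (state, action) is 'known' once enough samples exist."""
        return self.counts[(state, action)] >= self.known_threshold

    def step(self, state, action):
        """Query the cheapest simulator that is trustworthy for this (state, action)."""
        if self.knows(state, action):
            # Known region: trust the low-fidelity model of the transition.
            next_state, failure = self.low_sim.step(state, action)
        else:
            # Unknown region: pay for a high-fidelity sample and record it.
            next_state, failure = self.high_sim.step(state, action)
            self.counts[(state, action)] += 1
        return next_state, failure
```

The point of the sketch is only the decision structure: by conditioning the choice of simulator on sample-count metadata rather than on the learned values alone, the adversary avoids over-exploiting failures it has already seen while spending expensive high-fidelity queries on under-explored parts of the failure space.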
