Probabilistic Model Checking of Stochastic Reinforcement Learning Policies (2403.18725v1)
Abstract: We introduce a method to verify stochastic reinforcement learning (RL) policies. The approach is compatible with any RL algorithm, provided the algorithm and its environment jointly satisfy the Markov property: the next state of the environment depends only on the current state and the executed action, not on any earlier states or actions. Our method combines a verification technique known as model checking with RL. From a Markov decision process, a trained RL policy, and a probabilistic computation tree logic (PCTL) formula, it builds a formal model that is then verified with the model checker Storm. We demonstrate the method's applicability on multiple benchmarks, comparing it to two baselines, deterministic safety estimates and naive monolithic model checking. Our results show that the method is well suited to verifying stochastic RL policies.
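To make the pipeline concrete, below is a minimal sketch of checking a PCTL reachability property with Storm's Python bindings (stormpy). This is not the paper's implementation: the file name `model.prism` and the label `"crash"` are illustrative assumptions, and in the paper's setting the checked model would be the Markov chain induced by resolving the MDP's action choices with the trained stochastic policy.

```python
# Minimal sketch (not the paper's code): check a PCTL reachability
# property with Storm's Python bindings (stormpy).
# Assumptions: stormpy is installed, and "model.prism" is a PRISM file
# that defines a label "crash" on the unsafe states.
import stormpy

prism_program = stormpy.parse_prism_program("model.prism")

# PCTL query: exact probability of eventually reaching a "crash" state.
properties = stormpy.parse_properties('P=? [ F "crash" ]', prism_program)

# Build the state space restricted to what the property needs.
model = stormpy.build_model(prism_program, properties)

result = stormpy.model_checking(model, properties[0])
initial_state = model.initial_states[0]
print("P(F crash) from the initial state:", result.at(initial_state))
```

The returned probability can then be compared against a safety threshold; a bounded variant such as `P=? [ F<=k "crash" ]` would instead ask for the probability of reaching an unsafe state within k steps.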
- Safe reinforcement learning via shielding. In AAAI, pages 2669–2678. AAAI Press.
- Verifying reinforcement learning up to infinity. In IJCAI, pages 2154–2160. ijcai.org.
- Probabilistic guarantees for safe deep reinforcement learning. In FORMATS, volume 12288 of Lecture Notes in Computer Science, pages 231–248. Springer.
- Verified probabilistic policies for deep reinforcement learning. In NFM, volume 13260 of Lecture Notes in Computer Science, pages 193–212. Springer.
- Principles of model checking. MIT Press.
- Barhate, N. (2021). Minimal PyTorch implementation of proximal policy optimization. https://github.com/nikhilbarhate99/PPO-PyTorch.
- Verification of Markov decision processes using learning algorithms. In ATVA, volume 8837 of Lecture Notes in Computer Science, pages 98–114. Springer.
- Safe reinforcement learning via shielding under partial observability. In AAAI, pages 14748–14756. AAAI Press.
- Efficient on-the-fly algorithms for the analysis of timed games. In CONCUR, volume 3653 of Lecture Notes in Computer Science, pages 66–80. Springer.
- Formal verification of neural networks for safety-critical tasks in deep reinforcement learning. In UAI, volume 161 of Proceedings of Machine Learning Research, pages 333–343. PMLR.
- Verifying temporal properties of finite-state probabilistic programs. In FOCS, pages 338–345. IEEE Computer Society.
- The complexity of probabilistic verification. J. ACM, 42(4):857–907.
- Uppaal Stratego. In TACAS, volume 9035 of Lecture Notes in Computer Science, pages 206–211. Springer.
- Permissive controller synthesis for probabilistic systems. Log. Methods Comput. Sci., 11(2).
- Verifying learning-augmented systems. In SIGCOMM, pages 305–318. ACM.
- A comprehensive survey on safe reinforcement learning. J. Mach. Learn. Res., 16:1437–1480.
- COOL-MC: A comprehensive tool for reinforcement learning and model checking. In SETTA, volume 13649 of Lecture Notes in Computer Science, pages 41–49. Springer.
- Model checking for adversarial multi-agent reinforcement learning with reactive defense methods. In ICAPS, pages 162–170. AAAI Press.
- Omega-regular objectives in model-free reinforcement learning. In TACAS (1), volume 11427 of Lecture Notes in Computer Science, pages 395–412. Springer.
- A logic for reasoning about time and reliability. Formal Aspects Comput., 6(5):512–535.
- Deep reinforcement learning with temporal logics. In FORMATS, volume 12288 of Lecture Notes in Computer Science, pages 1–22. Springer.
- The probabilistic model checker Storm. Int. J. Softw. Tools Technol. Transf., 24(4):589–610.
- Efficient LTL model checking of deep reinforcement learning systems using policy extraction. In SEKE, pages 357–362. KSI Research Inc.
- Verifying deep-RL-driven systems. In NetAI@SIGCOMM, pages 83–89. ACM.
- PRISM 4.0: Verification of probabilistic real-time systems. In CAV, volume 6806 of Lecture Notes in Computer Science, pages 585–591. Springer.
- Environment-independent task specifications via GLTL. CoRR, abs/1704.04341.
- Playing Atari with deep reinforcement learning. CoRR, abs/1312.5602.
- Human-level control through deep reinforcement learning. Nat., 518(7540):529–533.
- Mastering the game of Go with deep neural networks and tree search. Nat., 529(7587):484–489.
- Reinforcement learning: An introduction. MIT Press.
- Scalar reward is not enough: a response to Silver, Singh, Precup and Sutton (2021). Auton. Agents Multi Agent Syst., 36(2):41.
- Grandmaster level in StarCraft II using multi-agent reinforcement learning. Nat., 575(7782):350–354.
- Vouros, G. A. (2023). Explainable deep reinforcement learning: State of the art and challenges. ACM Comput. Surv., 55(5):92:1–92:39.
- Statistically model checking PCTL specifications on Markov decision processes via reinforcement learning. In CDC, pages 1392–1397. IEEE.
- Interpretable reinforcement learning of behavior trees. In ICMLC, pages 492–499. ACM.
- An inductive synthesis framework for verifiable reinforcement learning. In PLDI, pages 686–701. ACM.