
Counterexample-Guided Strategy Improvement for POMDPs Using Recurrent Neural Networks (1903.08428v2)

Published 20 Mar 2019 in cs.AI and cs.LG

Abstract: We study strategy synthesis for partially observable Markov decision processes (POMDPs). The particular problem is to determine strategies that provably adhere to (probabilistic) temporal logic constraints. This problem is computationally intractable and theoretically hard. We propose a novel method that combines techniques from machine learning and formal verification. First, we train a recurrent neural network (RNN) to encode POMDP strategies. The RNN accounts for memory-based decisions without the need to expand the full belief space of a POMDP. Second, we restrict the RNN-based strategy to represent a finite-memory strategy and implement it on a specific POMDP. For the resulting finite Markov chain, efficient formal verification techniques provide provable guarantees against temporal logic specifications. If the specification is not satisfied, counterexamples supply diagnostic information. We use this information to improve the strategy by iteratively retraining the RNN. Numerical experiments show that the proposed method improves the state of the art in POMDP solving by up to three orders of magnitude in solving time and model size.
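The counterexample-guided loop in the abstract can be sketched on a toy example: fix a strategy, verify the induced Markov chain against a reachability threshold, and use the states that violate the threshold as a counterexample to revise the strategy. The POMDP below, the memoryless (observation-based) strategy, and the greedy revision step are all illustrative stand-ins invented for this sketch; the paper instead trains an RNN and uses formal verification tools on the induced chain.

```python
# Hypothetical toy POMDP: 3 states, goal = state 2.
# T[s][a] maps each (state, action) to a successor distribution {s': prob}.
T = {
    0: {"a": {0: 0.5, 1: 0.5}, "b": {0: 1.0}},
    1: {"a": {1: 1.0},         "b": {2: 1.0}},
    2: {"a": {2: 1.0},         "b": {2: 1.0}},
}
OBS = {0: "x", 1: "y", 2: "y"}   # observation function (states 1 and 2 look alike)
ACTIONS = ["a", "b"]
GOAL, THRESHOLD = 2, 0.9         # spec: reach GOAL with probability >= 0.9

def reach_prob(strategy, iters=200):
    """Value iteration for P(reach GOAL) on the Markov chain induced by
    the observation-based strategy (the 'verification' stand-in)."""
    p = {s: 1.0 if s == GOAL else 0.0 for s in T}
    for _ in range(iters):
        for s in T:
            if s != GOAL:
                act = strategy[OBS[s]]
                p[s] = sum(pr * p[t] for t, pr in T[s][act].items())
    return p

def improve(strategy, p):
    """States violating the threshold form the counterexample; revise the
    decision for each observation they produce (a greedy local update
    standing in for the paper's RNN retraining step)."""
    bad_obs = {OBS[s] for s in T if s != GOAL and p[s] < THRESHOLD}
    for obs in sorted(bad_obs):
        strategy[obs] = max(
            ACTIONS, key=lambda a: reach_prob({**strategy, obs: a})[0]
        )
    return strategy

def synthesize(max_rounds=10):
    strategy = {o: "a" for o in sorted(set(OBS.values()))}  # initial guess
    for _ in range(max_rounds):
        p = reach_prob(strategy)
        if p[0] >= THRESHOLD:          # spec satisfied from initial state 0
            return strategy, p[0]
        strategy = improve(strategy, p)
    return strategy, reach_prob(strategy)[0]

strategy, prob = synthesize()
```

Starting from the all-`"a"` strategy (which never reaches the goal), the counterexample flags both observations, and the revision step switches the decision under observation `"y"` to `"b"`, after which the induced chain satisfies the reachability threshold.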

Authors (6)
  1. Steven Carr (13 papers)
  2. Nils Jansen (73 papers)
  3. Ralf Wimmer (12 papers)
  4. Alexandru C. Serban (2 papers)
  5. Bernd Becker (11 papers)
  6. Ufuk Topcu (288 papers)
Citations (28)
