
Approximating Euclidean by Imprecise Markov Decision Processes (2006.14923v1)

Published 26 Jun 2020 in cs.AI

Abstract: Euclidean Markov decision processes are a powerful tool for modeling control problems under uncertainty over continuous domains. Finite-state imprecise Markov decision processes can be used to approximate the behavior of these infinite models. In this paper we address two questions: first, we investigate what kind of approximation guarantees are obtained when the Euclidean process is approximated by finite-state approximations induced by increasingly fine partitions of the continuous state space. We show that for cost functions over finite time horizons the approximations become arbitrarily precise. Second, we use imprecise Markov decision process approximations as a tool to analyse and validate cost functions and strategies obtained by reinforcement learning. We find that, on the one hand, our new theoretical results validate basic design choices of a previously proposed reinforcement learning approach. On the other hand, the imprecise Markov decision process approximations reveal some inaccuracies in the learned cost functions.
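The core construction described in the abstract — partitioning a continuous state space into cells and bounding, for each source cell, the probability of transitioning into each target cell — can be illustrated with a minimal sketch. This is not the paper's implementation; the 1-D dynamics, Gaussian noise, and Monte Carlo estimation below are all illustrative assumptions. The interval `[lower, upper]` on each cell-to-cell transition is what makes the resulting finite-state abstraction an *imprecise* MDP.

```python
import numpy as np

def cell_transition_bounds(step, n_cells=10, samples_per_cell=20,
                           sigma=0.05, rng=None):
    """Approximate a 1-D Euclidean MDP on [0, 1] by a finite-state
    imprecise MDP. Each partition cell becomes one abstract state;
    the probability of moving from cell i to cell j is bracketed by
    the min/max over concrete start states sampled inside cell i."""
    rng = np.random.default_rng(0) if rng is None else rng
    edges = np.linspace(0.0, 1.0, n_cells + 1)     # uniform partition
    lower = np.ones((n_cells, n_cells))            # running minima
    upper = np.zeros((n_cells, n_cells))           # running maxima
    n_mc = 500                                     # Monte Carlo draws per state
    for i in range(n_cells):
        # Sample concrete states inside the source cell.
        starts = rng.uniform(edges[i], edges[i + 1], samples_per_cell)
        for x in starts:
            # Estimate this state's transition distribution over target cells.
            nxt = np.clip(step(x) + rng.normal(0.0, sigma, n_mc), 0.0, 1.0)
            hist = np.histogram(nxt, bins=edges)[0] / n_mc
            lower[i] = np.minimum(lower[i], hist)
            upper[i] = np.maximum(upper[i], hist)
    return lower, upper

# Example dynamics (assumed for illustration): drift toward the centre.
lo, up = cell_transition_bounds(lambda x: x + 0.1 * (0.5 - x))
```

As the partition is refined (larger `n_cells`), the gap `up - lo` on each transition shrinks, which mirrors the paper's guarantee that finite-horizon cost bounds computed on the imprecise abstraction become arbitrarily precise.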

Authors (5)
  1. Manfred Jaeger (15 papers)
  2. Giorgio Bacci (16 papers)
  3. Giovanni Bacci (11 papers)
  4. Kim Guldstrand Larsen (18 papers)
  5. Peter Gjøl Jensen (5 papers)
Citations (12)
