
Variance Reduced Value Iteration and Faster Algorithms for Solving Markov Decision Processes (1710.09988v3)

Published 27 Oct 2017 in cs.DS, cs.LG, and math.OC

Abstract: In this paper we provide faster algorithms for approximately solving discounted Markov Decision Processes in multiple parameter regimes. Given a discounted Markov Decision Process (DMDP) with $|S|$ states, $|A|$ actions, discount factor $\gamma \in (0,1)$, and rewards in the range $[-M, M]$, we show how to compute an $\epsilon$-optimal policy, with probability $1 - \delta$, in time \[ \tilde{O}\left( \left(|S|^2 |A| + \frac{|S| |A|}{(1 - \gamma)^3} \right) \log\left( \frac{M}{\epsilon} \right) \log\left( \frac{1}{\delta} \right) \right). \] This is the first nearly linear time, nearly linearly convergent algorithm for solving DMDPs for intermediate values of $\gamma$. We also show how to obtain improved sublinear time algorithms provided we can sample from the transition function in $O(1)$ time. Under this assumption we provide an algorithm which computes an $\epsilon$-optimal policy with probability $1 - \delta$ in time \[ \tilde{O}\left( \frac{|S| |A| M^2}{(1 - \gamma)^4 \epsilon^2} \log\left( \frac{1}{\delta} \right) \right). \] Lastly, we extend both of these algorithms to solve finite horizon MDPs. Our algorithms improve upon the previous best for approximately computing optimal policies for fixed-horizon MDPs in multiple parameter regimes. Interestingly, we obtain our results by a careful modification of approximate value iteration: we show how to combine classic approximate value iteration analysis with new techniques in variance reduction. Our fastest algorithms leverage further insights to ensure that they make monotonic progress towards the optimal value. This paper is one of the few instances of using sampling to obtain a linearly convergent linear programming algorithm, and we hope that the analysis may be useful more broadly.
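To make the variance-reduction idea concrete, below is a minimal NumPy sketch of sampled value iteration with a variance-reduced Bellman backup, under assumptions not taken from the paper: a dense transition tensor `P` of shape `(A, S, S)`, a reward array `r` of shape `(A, S)`, and a fixed per-entry sampling budget `m`. Each epoch computes one exact "anchor" product $P v_0$, then cheap inner iterations estimate only the correction $P(v - v_0)$ from samples, whose variance shrinks as $v$ approaches $v_0$. All names and the schedule of `epochs`, `inner`, and `m` are illustrative choices, not the paper's exact algorithm (which additionally enforces monotonic progress toward the optimal value).

```python
import numpy as np

def vr_value_iteration(P, r, gamma, epochs=10, inner=50, m=100, seed=None):
    """Illustrative variance-reduced sampled value iteration (a sketch,
    not the paper's exact procedure)."""
    rng = np.random.default_rng(seed)
    A, S, _ = P.shape
    v = np.zeros(S)
    for _ in range(epochs):
        v0 = v.copy()
        # Expensive exact "anchor", computed once per epoch: (P_a v0)(s).
        anchor = P @ v0                     # shape (A, S)
        for _ in range(inner):
            q = np.empty((A, S))
            for a in range(A):
                for s in range(S):
                    # Cheap sampled estimate of the correction
                    # P_a(s, .)(v - v0); its variance scales with
                    # ||v - v0||, which shrinks as v converges,
                    # so a small sample budget m suffices.
                    ns = rng.choice(S, size=m, p=P[a, s])
                    corr = (v[ns] - v0[ns]).mean()
                    q[a, s] = r[a, s] + gamma * (anchor[a, s] + corr)
            v = q.max(axis=0)
    policy = (r + gamma * (P @ v)).argmax(axis=0)
    return v, policy
```

The design point this sketch illustrates is that naive sampled value iteration must estimate the full product $Pv$ at every iteration, while centering the estimate around a periodically recomputed anchor shifts the sampling cost onto the small residual $v - v_0$, which is the source of the improved running times in the abstract.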

Citations (120)
