
Weighted Difference Approximation of Value Functions for Slow-Discounting Markov Decision Processes (1412.4908v1)

Published 16 Dec 2014 in math.OC

Abstract: Markov decision processes (MDPs) often require frequent decision making, that is, taking an action every microsecond, second, or minute. The infinite-horizon discounted-reward formulation remains relevant for a large portion of these applications, because the actual time span of these problems can be months or years, over which discounting factors due to, e.g., interest rates are of practical concern. In this paper, we show that, for such MDPs with discount rate $\alpha$ close to $1$, under a common ergodicity assumption, a weighted difference between two successive value function estimates obtained from classical value iteration (VI) is a better approximation than the value function obtained directly from VI. Rigorous error bounds are established, which in turn show that the approximation converges to the actual value function at a rate $(\alpha \beta)^k$ with $\beta < 1$. This indicates geometric convergence even as the discount factor $\alpha \to 1$. Furthermore, we explicitly link the convergence speed to the system behavior of the MDP using the notion of $\epsilon$-mixing time and extend our result to Q-functions. Numerical experiments are conducted to demonstrate the convergence properties of the proposed approximation scheme.
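The abstract's central idea is to combine the last two value-iteration iterates into a weighted difference that approximates the value function better than the latest iterate alone when $\alpha$ is close to $1$. The sketch below illustrates that idea on a randomly generated MDP; it is not the paper's construction. In particular, the weight $\alpha/(1-\alpha)$ used to extrapolate from the successive difference is an illustrative choice (motivated by the geometric decay of successive VI differences), not necessarily the weighting derived in the paper.

```python
import numpy as np

def value_iteration_step(V, P, R, alpha):
    """One Bellman backup: V'(s) = max_a [ R(s,a) + alpha * sum_s' P(s,a,s') V(s') ].

    P has shape (S, A, S) and is row-stochastic in its last axis; R has shape (S, A).
    """
    return np.max(R + alpha * np.einsum("sat,t->sa", P, V), axis=1)

def weighted_difference_approx(P, R, alpha, k):
    """Run k steps of classical VI and return both the plain iterate V_k and a
    weighted-difference approximation built from the last two iterates.

    The weight alpha / (1 - alpha) is a hypothetical, illustrative choice.
    """
    S = R.shape[0]
    V_prev = np.zeros(S)
    V = value_iteration_step(V_prev, P, R, alpha)
    for _ in range(k - 1):
        V_prev, V = V, value_iteration_step(V, P, R, alpha)
    # Extrapolate using the successive difference (assumed weighting, see lead-in).
    W = V + (alpha / (1.0 - alpha)) * (V - V_prev)
    return V, W

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    S, A, alpha = 5, 3, 0.999          # slow discounting: alpha close to 1
    P = rng.random((S, A, S))
    P /= P.sum(axis=2, keepdims=True)  # normalize to a valid transition kernel
    R = rng.random((S, A))
    V_k, W_k = weighted_difference_approx(P, R, alpha, k=50)
    print("plain VI iterate V_k:   ", np.round(V_k, 2))
    print("weighted difference W_k:", np.round(W_k, 2))
```

With $\alpha$ this close to $1$, plain VI contracts only at rate $\alpha$ per step, so $V_k$ after 50 iterations is still far from the fixed point, while the extrapolated $W_k$ is typically much closer; the paper's contribution is the rigorous error bound showing a $(\alpha \beta)^k$ rate, with $\beta < 1$, for its weighted-difference approximation.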
