
Properties of the Least Squares Temporal Difference learning algorithm (1301.5220v2)

Published 22 Jan 2013 in stat.ML and cs.LG

Abstract: This paper presents four different ways of looking at the well-known Least Squares Temporal Differences (LSTD) algorithm for computing the value function of a Markov Reward Process, each of them leading to different insights: the operator-theory approach via the Galerkin method, the statistical approach via instrumental variables, the linear dynamical system view, and the limit of the TD iteration. We also give a geometric view of the algorithm as an oblique projection. Furthermore, we extensively compare the optimization problem solved by LSTD with that solved by Bellman Residual Minimization (BRM). We then review several schemes for regularizing the LSTD solution, and finally treat the modification of LSTD for the case of episodic Markov Reward Processes.
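For concreteness, the LSTD estimator discussed in the abstract can be written in its standard linear function approximation form (the symbols \(\phi\), \(\gamma\), \(r\), and \(\theta\) are our own shorthand, not notation taken verbatim from the paper):

\[
\hat{\theta}_{\mathrm{LSTD}} = \hat{A}^{-1}\hat{b},
\qquad
\hat{A} = \sum_{t} \phi(s_t)\bigl(\phi(s_t) - \gamma\,\phi(s_{t+1})\bigr)^{\top},
\qquad
\hat{b} = \sum_{t} \phi(s_t)\, r_t,
\]

so that the value-function estimate is \(\hat{V}(s) = \phi(s)^{\top}\hat{\theta}_{\mathrm{LSTD}}\). The operator-theoretic, instrumental-variable, and oblique-projection perspectives mentioned in the abstract can be read as different interpretations of this same estimator.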
