Predictor-Corrector (PC) Temporal Difference (TD) Learning (PCTD)

Published 15 Apr 2021 in cs.LG and cs.AI (arXiv:2104.09620v1)

Abstract: Using insights from the numerical approximation of ODEs, together with the formulation and solution of TD learning as a Galerkin relaxation, I propose a new class of TD learning algorithms. Applying the improved numerical methods yields a guaranteed order-of-magnitude reduction in the Taylor series error of the solution to the ODE for the parameter $\theta(t)$ used to construct the linearly parameterized value function. Predictor-Corrector Temporal Difference (PCTD) is the name I give to the discrete-time Reinforcement Learning (RL) algorithm obtained by translating this continuous-time ODE back to discrete time via the theory of Stochastic Approximation (SA). Both causal and non-causal implementations of the algorithm are provided, and simulation results on an infinite-horizon task compare the original TD(0) algorithm against both versions of PCTD(0).
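To make the predictor-corrector idea concrete, here is a minimal sketch, not the paper's implementation: it treats the standard TD(0) update with linear function approximation as a forward-Euler step of the mean ODE for $\theta(t)$, and replaces it with a Heun-style (improved Euler) predictor-corrector step. The function names (`td_drift`, `td0_step`, `pctd0_step_causal`), the feature vectors, and all step-size and discount choices are assumptions introduced for illustration.

```python
import numpy as np

def td_drift(theta, phi_s, phi_s_next, r, gamma):
    """Sampled TD(0) 'drift' f(theta) = delta * phi(s), i.e. a noisy
    evaluation of the right-hand side of the mean ODE for theta(t)."""
    delta = r + gamma * phi_s_next @ theta - phi_s @ theta
    return delta * phi_s

def td0_step(theta, phi_s, phi_s_next, r, gamma, alpha):
    """Standard TD(0): a forward-Euler step of the mean ODE."""
    return theta + alpha * td_drift(theta, phi_s, phi_s_next, r, gamma)

def pctd0_step_causal(theta, phi_s, phi_s_next, r, gamma, alpha):
    """A causal Heun-style predictor-corrector step (illustrative):
    the corrector slope is re-evaluated at the predicted parameter on
    the SAME transition sample, so no future data is required.  A
    non-causal variant would instead evaluate the corrector slope on
    the next transition sample."""
    f0 = td_drift(theta, phi_s, phi_s_next, r, gamma)       # predictor slope
    theta_pred = theta + alpha * f0                          # Euler predictor
    f1 = td_drift(theta_pred, phi_s, phi_s_next, r, gamma)  # corrector slope
    return theta + 0.5 * alpha * (f0 + f1)                  # trapezoidal average

# Hypothetical usage on a single sampled transition (s, r, s'):
rng = np.random.default_rng(0)
d = 8                                    # feature dimension (illustrative)
theta = np.zeros(d)
phi_s, phi_s_next = rng.normal(size=d), rng.normal(size=d)
theta = pctd0_step_causal(theta, phi_s, phi_s_next, r=1.0, gamma=0.9, alpha=0.05)
```

Averaging the predictor and corrector slopes is what raises the local truncation error from first to second order in the step size, which is the mechanism behind the Taylor series error reduction claimed in the abstract.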

Citations (1)
