Predictor-Corrector (PC) Temporal Difference (TD) Learning (PCTD) (2104.09620v1)
Abstract: Using insights from the numerical approximation of ODEs and from the formulation of TD learning as a Galerkin relaxation, I propose a new class of TD learning algorithms. Applying the improved numerical methods yields a guaranteed order-of-magnitude reduction in the Taylor-series error of the solution to the ODE for the parameter $\theta(t)$ used to construct the linearly parameterized value function. Predictor-Corrector Temporal Difference (PCTD) is the name I give to the discrete-time Reinforcement Learning (RL) algorithm obtained by translating the continuous-time ODE into a stochastic update via the theory of Stochastic Approximation (SA). Both causal and non-causal implementations of the algorithm are provided, and simulation results for an infinite-horizon task compare the original TD(0) algorithm against both versions of PCTD(0).
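To make the predictor-corrector idea concrete, the sketch below contrasts a standard TD(0) update with a Heun-style predictor-corrector variant for a linearly parameterized value function $V(s) = \theta^\top \phi(s)$. This is an illustrative assumption based on the abstract (a plain TD step as the Euler predictor, followed by a trapezoidal-style corrector); the exact PCTD(0) updates, including the causal and non-causal variants, are defined in the paper and may differ.

```python
import numpy as np

def td0_update(theta, phi_s, phi_s_next, reward, alpha, gamma):
    """Standard TD(0) update with linear value function V(s) = theta @ phi(s)."""
    delta = reward + gamma * theta @ phi_s_next - theta @ phi_s
    return theta + alpha * delta * phi_s

def pctd0_update(theta, phi_s, phi_s_next, reward, alpha, gamma):
    """Heun-style predictor-corrector TD(0) step (illustrative sketch only).

    Predictor: an Euler (plain TD(0)) step gives a provisional parameter.
    Corrector: the TD increment is re-evaluated at the predicted parameter and
    averaged with the original increment, mirroring a second-order ODE solver.
    """
    def increment(th):
        delta = reward + gamma * th @ phi_s_next - th @ phi_s
        return delta * phi_s

    g0 = increment(theta)               # slope at the current parameter
    theta_pred = theta + alpha * g0     # predictor: Euler / TD(0) step
    g1 = increment(theta_pred)          # slope re-evaluated at the predictor
    return theta + 0.5 * alpha * (g0 + g1)  # corrector: trapezoidal average

# Tiny usage example on random features (hypothetical data, not from the paper).
rng = np.random.default_rng(0)
theta = np.zeros(4)
phi_s, phi_s_next = rng.normal(size=4), rng.normal(size=4)
theta = pctd0_update(theta, phi_s, phi_s_next, reward=1.0, alpha=0.1, gamma=0.9)
print(theta)
```

The corrector step requires the increment at the predicted parameter, which is what distinguishes the non-causal formulation (using information not yet available at time $t$) from a causal approximation of it in the paper's two implementations.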