Is Temporal Difference Learning Optimal? An Instance-Dependent Analysis (2003.07337v1)
Abstract: We address the problem of policy evaluation in discounted Markov decision processes, and provide instance-dependent guarantees on the $\ell_\infty$-error under a generative model. We establish both asymptotic and non-asymptotic versions of local minimax lower bounds for policy evaluation, thereby providing an instance-dependent baseline by which to compare algorithms. Theory-inspired simulations show that the widely-used temporal difference (TD) algorithm is strictly suboptimal when evaluated in a non-asymptotic setting, even when combined with Polyak-Ruppert iterate averaging. We remedy this issue by introducing and analyzing variance-reduced forms of stochastic approximation, showing that they achieve non-asymptotic, instance-dependent optimality up to logarithmic factors.
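As a rough illustration of the setting the abstract describes, below is a minimal sketch of TD(0) policy evaluation with Polyak-Ruppert iterate averaging under a generative model. The toy MDP, step-size schedule, and all names here are illustrative assumptions, not the paper's exact algorithm or its variance-reduced variant.

```python
import numpy as np

def td0_policy_evaluation(P, r, gamma, num_iters=5000, seed=0):
    """TD(0) for policy evaluation under a generative model (illustrative sketch).

    P: (S, S) transition matrix of the Markov chain induced by the fixed policy.
    r: (S,) expected reward vector.
    gamma: discount factor in [0, 1).
    Returns the final iterate and the Polyak-Ruppert averaged iterate.
    """
    rng = np.random.default_rng(seed)
    S = len(r)
    theta = np.zeros(S)       # current value-function estimate
    theta_avg = np.zeros(S)   # Polyak-Ruppert running average
    for k in range(1, num_iters + 1):
        alpha = 1.0 / (1.0 + k) ** 0.75  # polynomially decaying step size (assumed choice)
        # Generative model: draw one next-state sample per state from P.
        next_states = np.array([rng.choice(S, p=P[s]) for s in range(S)])
        # TD(0) update toward the sampled Bellman backup r + gamma * theta(s').
        target = r + gamma * theta[next_states]
        theta = theta + alpha * (target - theta)
        # Running average of the iterates (Polyak-Ruppert).
        theta_avg += (theta - theta_avg) / k
    return theta, theta_avg

# Toy 3-state example (hypothetical numbers, for illustration only).
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.2, 0.7]])
r = np.array([1.0, 0.0, -1.0])
gamma = 0.9
theta_last, theta_pr = td0_policy_evaluation(P, r, gamma)
theta_star = np.linalg.solve(np.eye(3) - gamma * P, r)  # exact value function
print("ell_inf error, last iterate:", np.max(np.abs(theta_last - theta_star)))
print("ell_inf error, PR average:  ", np.max(np.abs(theta_pr - theta_star)))
```

Comparing the $\ell_\infty$-error of the last iterate against the averaged iterate for a fixed sample budget gives a simple empirical handle on the non-asymptotic behavior the paper analyzes.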
- Koulik Khamaru (21 papers)
- Ashwin Pananjady (36 papers)
- Feng Ruan (26 papers)
- Martin J. Wainwright (141 papers)
- Michael I. Jordan (438 papers)