
Primal-Dual Distributed Temporal Difference Learning

Published 21 May 2018 in math.OC (arXiv:1805.07918v6)

Abstract: The goal of this paper is to study a distributed version of the gradient temporal-difference (GTD) learning algorithm for a class of multi-agent Markov decision processes (MDPs). Temporal-difference (TD) learning is a reinforcement learning (RL) algorithm that learns an infinite-horizon discounted cost function (or value function) for a given fixed policy without knowledge of the model. In the multi-agent MDP, each agent receives a local reward through local processing. The agents communicate over sparse and random networks to learn the global value function corresponding to the aggregate of the local rewards. In this paper, the problem of estimating the global value function is converted into a constrained convex optimization problem. We then propose a stochastic primal-dual distributed algorithm to solve it and prove that the algorithm converges to a set of solutions of the optimization problem.
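To make the algorithmic idea concrete, below is a minimal sketch of one synchronous iteration of a distributed primal-dual GTD-style update with linear value-function features. This is an illustration under stated assumptions, not the paper's exact algorithm: the GTD2-style primal-dual update rule, the doubly stochastic mixing matrix `W`, and all names and step sizes here are assumptions introduced for illustration.

```python
import numpy as np

def distributed_gtd_step(theta, w, phi, phi_next, rewards, W,
                         alpha=0.01, beta=0.01, gamma=0.95):
    """One synchronous primal-dual step for N agents (illustrative sketch).

    theta, w : (N, d) arrays; primal and dual variables, one row per agent
    phi, phi_next : (d,) feature vectors of the current and next state
    rewards : (N,) local rewards, one per agent
    W : (N, N) doubly stochastic mixing matrix of the communication graph
    """
    N, d = theta.shape
    # Consensus step: each agent averages parameters with its neighbors,
    # which is how information about other agents' local rewards spreads.
    theta_mix = W @ theta
    w_mix = W @ w
    for i in range(N):
        # Local TD error: uses only agent i's own reward.
        delta = rewards[i] + gamma * phi_next @ theta_mix[i] - phi @ theta_mix[i]
        # Dual update: track a local estimate of the projected TD error.
        w[i] = w_mix[i] + beta * (delta - phi @ w_mix[i]) * phi
        # Primal update: descend along the gradient-correction direction.
        theta[i] = theta_mix[i] + alpha * (phi - gamma * phi_next) * (phi @ w_mix[i])
    return theta, w
```

In this sketch, each agent's primal variable converges toward a common value-function estimate because the mixing step couples the agents while the local stochastic updates are driven only by local rewards; this mirrors the paper's setting of sparse, random communication networks, though the specific update here follows a standard GTD2-style saddle-point form rather than the paper's exact construction.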
