Primal-Dual Distributed Temporal Difference Learning
Abstract: The goal of this paper is to study a distributed version of the gradient temporal-difference (GTD) learning algorithm for a class of multi-agent Markov decision processes (MDPs). Temporal-difference (TD) learning is a reinforcement learning (RL) algorithm that learns an infinite-horizon discounted cost function (or value function) for a given fixed policy without knowledge of the model. In the multi-agent MDP, each agent receives a local reward through local processing. The agents communicate over sparse and random networks to learn the global value function corresponding to the aggregate of the local rewards. In this paper, the problem of estimating the global value function is converted into a constrained convex optimization problem. We then propose a stochastic primal-dual distributed algorithm to solve it and prove that the algorithm converges to a set of solutions of the optimization problem.
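To make the idea concrete, below is a minimal sketch of a distributed primal-dual TD update with linear function approximation. It assumes a GTD2-style saddle-point update, a doubly stochastic mixing matrix `W` over the agent network for the consensus step, and a hypothetical `sample_transition` interface; the paper's exact update rules and assumptions may differ.

```python
import numpy as np

def distributed_primal_dual_td(sample_transition, W, dim, gamma=0.95,
                               alpha=0.01, beta=0.01, n_iters=10000):
    """Sketch of a distributed primal-dual (GTD2-style) TD algorithm.

    sample_transition() -> (phi_s, local_rewards, phi_s_next), where phi_s is
    the feature vector of the current state and local_rewards[i] is agent i's
    reward for the transition (hypothetical interface, not from the paper).
    W is a doubly stochastic matrix encoding the communication network.
    """
    n_agents = W.shape[0]
    theta = np.zeros((n_agents, dim))  # primal: each agent's value-weight copy
    w = np.zeros((n_agents, dim))      # dual variables of the saddle-point problem

    for _ in range(n_iters):
        f, r, f_next = sample_transition()
        for i in range(n_agents):
            # TD error of agent i, computed only from its local reward r[i]
            delta = r[i] + gamma * f_next @ theta[i] - f @ theta[i]
            # GTD2-style stochastic primal-dual updates
            theta[i] += alpha * (f - gamma * f_next) * (f @ w[i])
            w[i] += beta * (delta - f @ w[i]) * f
        # consensus step: mix estimates with neighbors over the sparse network
        theta, w = W @ theta, W @ w
    return theta.mean(axis=0)
```

The consensus step `W @ theta` is one common way to let agents agree on a global estimate while each sees only its own reward; the paper's analysis instead works through the constrained convex formulation, so this sketch should be read as illustrative rather than as the authors' algorithm.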