
A Multi-Agent Off-Policy Actor-Critic Algorithm for Distributed Reinforcement Learning (1903.06372v3)

Published 15 Mar 2019 in cs.LG and stat.ML

Abstract: This paper extends off-policy reinforcement learning to the multi-agent case in which a set of networked agents communicating with their neighbors according to a time-varying graph collaboratively evaluates and improves a target policy while following a distinct behavior policy. To this end, the paper develops a multi-agent version of emphatic temporal difference learning for off-policy policy evaluation, and proves convergence under linear function approximation. The paper then leverages this result, in conjunction with a novel multi-agent off-policy policy gradient theorem and recent work in both multi-agent on-policy and single-agent off-policy actor-critic methods, to develop and give convergence guarantees for a new multi-agent off-policy actor-critic algorithm.
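The core building block described in the abstract is a multi-agent variant of emphatic temporal difference (ETD) learning under linear function approximation, in which networked agents follow a shared behavior policy, apply local off-policy TD updates, and combine their parameters with neighbors' via a consensus step. The sketch below illustrates that idea on a toy problem; the two-state MDP, the specific policies, the per-agent local rewards, the uniform consensus matrix `W`, and the variable names are all illustrative assumptions, not the paper's notation or its actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Toy 2-state, 2-action MDP (illustrative; not from the paper) ---
n_states, n_actions = 2, 2
gamma, alpha = 0.9, 0.05
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a] -> dist over s'
pi = np.full((n_states, n_actions), 0.5)      # target policy (uniform)
mu = np.array([[0.7, 0.3], [0.4, 0.6]])       # behavior policy (distinct from pi)
phi = np.eye(n_states)                        # tabular features phi(s)

n_agents = 3
R_local = rng.standard_normal((n_agents, n_states, n_actions))  # each agent's local reward
theta = np.zeros((n_agents, n_states))        # one weight vector per agent
W = np.full((n_agents, n_agents), 1.0 / n_agents)  # doubly stochastic consensus weights

F = np.ones(n_agents)          # emphatic followon traces (interest i(s) = 1)
prev_rho = 1.0                 # importance ratio from the previous step
s = 0
for t in range(2000):
    a = rng.choice(n_actions, p=mu[s])
    s_next = rng.choice(n_states, p=P[s, a])
    rho = pi[s, a] / mu[s, a]  # importance-sampling ratio pi/mu
    for k in range(n_agents):
        # Followon trace uses the ratio from the previous transition.
        F[k] = gamma * prev_rho * F[k] + 1.0
        # Off-policy TD error with agent k's local reward and current weights.
        delta = R_local[k, s, a] + gamma * theta[k] @ phi[s_next] - theta[k] @ phi[s]
        # Emphatic TD(0) update, emphasis M_t = F_t here since lambda = 0.
        theta[k] += alpha * F[k] * rho * delta * phi[s]
    theta = W @ theta          # consensus averaging with neighbors
    prev_rho = rho
    s = s_next
```

With a fully connected averaging matrix, the consensus step drives all agents toward a common value estimate for the network-average reward even though each agent only ever sees its own local reward; the paper's setting replaces this fixed `W` with time-varying neighbor communication and couples the evaluation step to an actor update.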

Authors (6)
  1. Wesley Suttle
  2. Zhuoran Yang
  3. Kaiqing Zhang
  4. Zhaoran Wang
  5. Ji Liu
  6. Tamer Basar
Citations (62)
