A Multi-Agent Off-Policy Actor-Critic Algorithm for Distributed Reinforcement Learning (1903.06372v3)
Abstract: This paper extends off-policy reinforcement learning to the multi-agent case, in which a set of networked agents, communicating with their neighbors according to a time-varying graph, collaboratively evaluates and improves a target policy while following a distinct behavior policy. To this end, the paper develops a multi-agent version of emphatic temporal difference learning for off-policy policy evaluation and proves its convergence under linear function approximation. The paper then leverages this result, in conjunction with a novel multi-agent off-policy policy gradient theorem and recent work on both multi-agent on-policy and single-agent off-policy actor-critic methods, to develop a new multi-agent off-policy actor-critic algorithm and establish its convergence guarantees.
- Wesley Suttle
- Zhuoran Yang
- Kaiqing Zhang
- Zhaoran Wang
- Ji Liu
- Tamer Basar
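
To make the critic-side machinery in the abstract concrete, below is a minimal Python sketch, not the paper's algorithm, of an emphatic TD(λ) update with linear function approximation followed by a neighbor-averaging (consensus) step of the kind used in networked-agent methods. The function name `etd_consensus_step`, the synchronous update order, the equal-weight mixing coefficients, and the constant interest i_t = 1 are illustrative assumptions, not details taken from the paper.

```python
import numpy as np


def etd_consensus_step(w, e, F, phi, phi_next, reward, rho, rho_prev,
                       neighbor_ws, mix_weights,
                       gamma=0.95, lam=0.8, alpha=0.01, interest=1.0):
    """One local emphatic-TD(lambda) update followed by a consensus average.

    w              : this agent's critic weights; v(s) is approximated by w @ phi(s)
    e, F           : eligibility trace and follow-on trace carried across steps
    phi, phi_next  : feature vectors of the current and next state
    rho, rho_prev  : importance ratios pi(a|s)/mu(a|s) at the current and previous step
    neighbor_ws    : weight vectors received from neighbors at this step
    mix_weights    : consensus coefficients over [self] + neighbors (sum to 1)
    """
    # Follow-on trace and emphasis, as in single-agent emphatic TD.
    F = rho_prev * gamma * F + interest
    M = lam * interest + (1.0 - lam) * F

    # Emphasis-weighted, importance-corrected eligibility trace.
    e = rho * (gamma * lam * e + M * phi)

    # Off-policy TD error under linear function approximation.
    delta = reward + gamma * phi_next @ w - phi @ w

    # Local critic step, then average with neighbors over the current graph.
    w_local = w + alpha * delta * e
    stacked = [w_local] + list(neighbor_ws)
    w_new = sum(c * wj for c, wj in zip(mix_weights, stacked))
    return w_new, e, F


# Toy run (hypothetical): 3 agents on a chain graph, random features and rewards.
rng = np.random.default_rng(0)
d, n_agents = 4, 3
ws = [np.zeros(d) for _ in range(n_agents)]
es = [np.zeros(d) for _ in range(n_agents)]
Fs = [0.0] * n_agents
phi, rho_prev = rng.standard_normal(d), 1.0
for t in range(100):
    phi_next = rng.standard_normal(d)
    rho = rng.uniform(0.5, 1.5)  # stand-in importance-sampling ratio
    updated = []
    for i in range(n_agents):
        nbrs = [ws[j] for j in range(n_agents) if abs(j - i) == 1]
        mix = [1.0 / (1 + len(nbrs))] * (1 + len(nbrs))
        w_i, es[i], Fs[i] = etd_consensus_step(
            ws[i], es[i], Fs[i], phi, phi_next, rng.standard_normal(),
            rho, rho_prev, nbrs, mix)
        updated.append(w_i)
    ws, phi, rho_prev = updated, phi_next, rho
```

The consensus average stands in for communication over the time-varying graph described in the abstract; the paper's actual method couples an off-policy critic of this kind with a decentralized actor update derived from its multi-agent off-policy policy gradient theorem.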