Distributed Policy Gradient with Variance Reduction in Multi-Agent Reinforcement Learning (2111.12961v3)

Published 25 Nov 2021 in cs.MA, cs.LG, and math.OC

Abstract: This paper studies distributed policy gradient in collaborative multi-agent reinforcement learning (MARL), where agents connected over a communication network aim to find the optimal policy that maximizes the average of all agents' local returns. Because the performance function in policy gradient is non-concave, existing distributed stochastic optimization methods for convex problems cannot be applied directly. This paper proposes a distributed policy gradient method with variance reduction and gradient tracking to address the high variance of policy gradient estimates, and uses importance weights to correct for the distribution shift that arises in the sampling process. We then provide an upper bound on the mean-squared stationary gap, which depends on the number of iterations, the mini-batch size, the epoch size, the problem parameters, and the network topology. We further establish the sample and communication complexity required to obtain an $\epsilon$-approximate stationary point. Numerical experiments validate the effectiveness of the proposed algorithm.
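The abstract combines three ingredients: an SVRG-style variance-reduced gradient estimator refreshed each epoch, gradient tracking so that each agent follows an estimate of the network-wide average gradient, and importance weights that correct for the distribution shift when a snapshot policy's gradient is re-evaluated on fresh samples. The following is a minimal NumPy sketch of how these pieces fit together, assuming a ring topology with a doubly stochastic mixing matrix and replacing the paper's trajectory sampling with a toy quadratic stochastic-gradient oracle; `stoch_grad`, the step size, the batch sizes, and the epoch length are illustrative assumptions, not the paper's specification.

```python
# Hedged sketch: distributed variance-reduced gradient method with gradient
# tracking over a network. The paper's MDP/trajectory sampling is replaced
# by a toy stochastic-gradient oracle; the ring topology, step size, and
# batch/epoch sizes are illustrative assumptions, not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)
N, D = 5, 3            # number of agents, parameter dimension
ALPHA = 0.05           # step size (assumed)
EPOCH, ITERS = 10, 20  # epoch size and total inner iterations (assumed)

# Doubly stochastic mixing matrix for a ring network (assumption).
W = np.zeros((N, N))
for i in range(N):
    W[i, i] = 0.5
    W[i, (i - 1) % N] = W[i, (i + 1) % N] = 0.25

targets = rng.normal(size=(N, D))  # each agent's local optimum (toy data)

def stoch_grad(i, x, batch):
    """Toy stand-in for agent i's stochastic gradient estimate.

    In the paper this would be a REINFORCE-style policy gradient from
    sampled trajectories; when it is re-evaluated at a snapshot point, an
    importance weight pi_x(tau) / pi_snapshot(tau) corrects for the
    distribution shift between the sampling and evaluation policies.
    """
    noise = rng.normal(scale=0.1 / np.sqrt(batch), size=D)
    return (x - targets[i]) + noise  # gradient of 0.5 * ||x - target||^2

x = np.zeros((N, D))                              # local parameters
snap = x.copy()                                   # snapshot (reference) points
g_snap = np.stack([stoch_grad(i, snap[i], 100) for i in range(N)])
v = g_snap.copy()                                 # variance-reduced estimates
y = v.copy()                                      # gradient trackers

for t in range(ITERS):
    # Consensus step plus descent along the tracked gradient direction.
    x = W @ x - ALPHA * y
    v_old = v.copy()
    if t % EPOCH == 0:
        # New epoch: refresh the large-batch snapshot gradient.
        snap = x.copy()
        g_snap = np.stack([stoch_grad(i, snap[i], 100) for i in range(N)])
        v = g_snap.copy()
    else:
        # SVRG-style correction: mini-batch gradient at x, minus the
        # (importance-weighted, in the paper) gradient at the snapshot,
        # plus the stored large-batch snapshot gradient.
        v = np.stack([
            stoch_grad(i, x[i], 10) - stoch_grad(i, snap[i], 10) + g_snap[i]
            for i in range(N)
        ])
    # Gradient tracking: mix neighbors' trackers, add the local innovation.
    y = W @ y + v - v_old

print("mean stationarity gap:", np.linalg.norm(y.mean(axis=0)))
```

The tracking update y = W @ y + v - v_old keeps the network average of the trackers equal to the average of the local variance-reduced estimates, so each agent descends along an approximation of the global gradient rather than only its own; this is what lets the analysis bound the mean-squared stationary gap in terms of the network topology.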

Authors (4)
  1. Xiaoxiao Zhao (7 papers)
  2. Jinlong Lei (31 papers)
  3. Li Li (657 papers)
  4. Jie Chen (602 papers)
Citations (2)
