
Distributed Cooperative Multi-Agent Reinforcement Learning with Directed Coordination Graph (2201.04962v1)

Published 10 Jan 2022 in cs.MA, cs.AI, cs.LG, cs.SY, eess.SY, and math.OC

Abstract: Existing distributed cooperative multi-agent reinforcement learning (MARL) frameworks usually assume undirected coordination and communication graphs while estimating a global reward via consensus algorithms for policy evaluation. Such a framework may induce expensive communication costs and exhibit poor scalability due to the requirement of global consensus. In this work, we study MARL with directed coordination graphs, and propose a distributed RL algorithm where the local policy evaluations are based on local value functions. The local value function of each agent is obtained by local communication with its neighbors through a directed learning-induced communication graph, without using any consensus algorithm. A zeroth-order optimization (ZOO) approach based on parameter perturbation is employed to achieve gradient estimation. By comparing with existing ZOO-based RL algorithms, we show that our proposed distributed RL algorithm guarantees high scalability. A distributed resource allocation example is shown to illustrate the effectiveness of our algorithm.
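The abstract's gradient-estimation step can be illustrated with a generic two-point zeroth-order scheme: perturb the parameters along a random direction, evaluate the objective at both perturbed points, and scale the difference to estimate the gradient. The sketch below is a minimal illustration of that general ZOO technique; the function name, sampling scheme, and step sizes are assumptions, not the paper's exact algorithm.

```python
import math
import random

def zoo_gradient(f, x, delta=1e-3, samples=5000):
    """Two-point zeroth-order gradient estimate of f at x.

    Averages d/(2*delta) * (f(x + delta*u) - f(x - delta*u)) * u
    over random unit directions u. This is a standard ZOO estimator;
    the paper's distributed, per-agent perturbation rule may differ.
    """
    d = len(x)
    grad = [0.0] * d
    for _ in range(samples):
        # Draw a uniformly random unit direction u.
        u = [random.gauss(0.0, 1.0) for _ in range(d)]
        norm = math.sqrt(sum(c * c for c in u))
        u = [c / norm for c in u]
        # Evaluate f at the two perturbed points (no gradients needed).
        xp = [xi + delta * ui for xi, ui in zip(x, u)]
        xm = [xi - delta * ui for xi, ui in zip(x, u)]
        scale = d * (f(xp) - f(xm)) / (2 * delta)
        for i in range(d):
            grad[i] += scale * u[i] / samples
    return grad

# Example: f(x) = sum(x_i^2), whose true gradient at x is 2x.
random.seed(0)
f = lambda x: sum(xi * xi for xi in x)
x = [1.0, -2.0, 0.5]
g = zoo_gradient(f, x)  # approximately [2.0, -4.0, 1.0]
```

Because only function evaluations are used, each agent can estimate a policy gradient from local value queries alone, which is what makes ZOO attractive in the distributed setting the abstract describes.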

Authors (5)
  1. Gangshan Jing (15 papers)
  2. He Bai (50 papers)
  3. Jemin George (25 papers)
  4. Aranya Chakrabortty (40 papers)
  5. Piyush K. Sharma (1 paper)
Citations (5)
