Scalable Centralized Deep Multi-Agent Reinforcement Learning via Policy Gradients (1805.08776v1)

Published 22 May 2018 in cs.LG, cs.AI, cs.MA, and stat.ML

Abstract: In this paper, we explore using deep reinforcement learning for problems with multiple agents. Most existing methods for deep multi-agent reinforcement learning consider only a small number of agents; as the number of agents increases, the dimensionality of the input and control spaces increases as well, and these methods do not scale. To address this, we propose casting the multi-agent reinforcement learning problem as a distributed optimization problem. Our algorithm assumes that, in multi-agent settings, the policies of individual agents in a given population lie close to one another in parameter space and can be approximated by a single policy. With this simple assumption, we show our algorithm to be highly effective for reinforcement learning in multi-agent settings. We demonstrate its effectiveness against existing comparable approaches on cooperative and competitive tasks.
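The core assumption above, that one shared set of parameters can stand in for a whole population of agents, can be illustrated with a minimal REINFORCE-style sketch. This is not the paper's algorithm: the one-step bandit task, the single scalar parameter, and all function names below are illustrative assumptions, showing only how per-agent policy gradients can be averaged into updates for a single shared policy.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def agent_episode(theta, rng):
    """One-step episode for one agent (toy task, not from the paper).

    The agent picks arm 1 with probability sigmoid(theta); arm 1 pays
    reward 1 and arm 0 pays reward 0. Returns the REINFORCE gradient
    sample r * d/dtheta log pi(a | theta) for a Bernoulli policy.
    """
    p = sigmoid(theta)
    a = 1 if rng.random() < p else 0
    r = float(a)            # arm 1 is the better arm
    grad_logp = a - p       # gradient of log-prob for Bernoulli(sigmoid(theta))
    return r * grad_logp

def train_shared_policy(n_agents=10, steps=200, lr=0.5, seed=0):
    """Train ONE policy parameter shared by all agents.

    Each agent contributes its own gradient sample; averaging them
    mirrors the assumption that all agents' policies lie close together
    in parameter space and can be approximated by a single policy.
    """
    rng = random.Random(seed)
    theta = 0.0
    for _ in range(steps):
        grad = sum(agent_episode(theta, rng) for _ in range(n_agents)) / n_agents
        theta += lr * grad
    return theta
```

Because every agent's gradient pushes the same shared parameter, the update cost stays constant as the population grows; only the number of gradient samples per step scales with the agent count, which is the scalability argument the abstract makes.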

Authors (5)
  1. Arbaaz Khan (12 papers)
  2. Clark Zhang (8 papers)
  3. Daniel D. Lee (44 papers)
  4. Vijay Kumar (191 papers)
  5. Alejandro Ribeiro (281 papers)
Citations (30)
