Scalability Bottlenecks in Multi-Agent Reinforcement Learning Systems (2302.05007v1)

Published 10 Feb 2023 in cs.MA

Abstract: Multi-Agent Reinforcement Learning (MARL) is a promising area of research for modeling and controlling multiple autonomous decision-making agents. During online training, MARL algorithms involve performance-intensive computations, such as the exploration and exploitation phases, that originate from the large observation-action spaces of the multiple agents. In this article, we seek to characterize the scalability bottlenecks in several popular classes of MARL algorithms during their training phases. Our experimental results reveal new insights into the key modules of MARL algorithms that limit scalability, and outline potential strategies that may help address these performance issues.
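
A rough way to see why the observation-action space becomes a training bottleneck: with N agents that each select from a discrete set of |A| actions, the joint action space grows exponentially as |A|^N, which is what makes the exploration and exploitation computations mentioned in the abstract expensive. The sketch below is purely illustrative (the agent and action counts are hypothetical, not taken from the paper):

```python
# Illustrative sketch (not from the paper): joint action-space growth in MARL.
# Each agent has `actions_per_agent` discrete actions; the joint action space
# over all agents therefore has actions_per_agent ** num_agents entries.

def joint_action_space_size(num_agents: int, actions_per_agent: int) -> int:
    """Size of the joint action space for independent discrete action sets."""
    return actions_per_agent ** num_agents

if __name__ == "__main__":
    for n in (2, 4, 8, 16):
        size = joint_action_space_size(n, 5)
        print(f"{n} agents x 5 actions each -> {size:,} joint actions")
```

Even for modest per-agent action sets, the joint space quickly exceeds what exhaustive exploration can cover, which motivates the paper's per-module characterization of where training time is actually spent.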

Authors (4)
  1. Kailash Gogineni (8 papers)
  2. Peng Wei (112 papers)
  3. Tian Lan (162 papers)
  4. Guru Venkataramani (18 papers)
Citations (6)
