
ColosseumRL: A Framework for Multiagent Reinforcement Learning in $N$-Player Games (1912.04451v1)

Published 10 Dec 2019 in cs.MA

Abstract: Much of the recent success in multiagent reinforcement learning has been in two-player zero-sum games. In these games, algorithms such as fictitious self-play and minimax tree search can converge to an approximate Nash equilibrium. While playing a Nash equilibrium strategy is optimal in a two-player zero-sum game, in an $n$-player general-sum game the Nash equilibrium becomes a much less informative solution concept. Despite the lack of a satisfying solution concept, $n$-player games form the vast majority of real-world multiagent situations. In this paper we present a new framework for research in reinforcement learning in $n$-player games. We hope that by analyzing behavior learned by agents in these environments the community can better understand this important research area and move toward meaningful solution concepts and research directions. The implementation and additional information about this framework can be found at https://colosseumrl.igb.uci.edu/.
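The abstract's claim that fictitious self-play converges to an approximate Nash equilibrium in two-player zero-sum games can be illustrated with a minimal sketch (this is a generic textbook implementation for rock-paper-scissors, not code from the ColosseumRL framework; the payoff matrix and iteration count are chosen for illustration):

```python
import numpy as np

# Payoff matrix for rock-paper-scissors from player 1's perspective:
# rows index player 1's action, columns index player 2's action.
A = np.array([[0., -1., 1.],
              [1., 0., -1.],
              [-1., 1., 0.]])

def fictitious_play(A, iterations=20000):
    """Run fictitious play in a two-player zero-sum matrix game.

    Each player repeatedly best-responds to the opponent's empirical
    action frequencies; in zero-sum games those frequencies converge
    to an approximate Nash equilibrium.
    """
    n, m = A.shape
    counts1 = np.ones(n)  # action counts for player 1 (uniform prior)
    counts2 = np.ones(m)  # action counts for player 2
    for _ in range(iterations):
        # Best response to the opponent's empirical mixed strategy.
        br1 = np.argmax(A @ (counts2 / counts2.sum()))
        br2 = np.argmax(-A.T @ (counts1 / counts1.sum()))
        counts1[br1] += 1
        counts2[br2] += 1
    return counts1 / counts1.sum(), counts2 / counts2.sum()

p1, p2 = fictitious_play(A)
# Both empirical strategies approach the unique equilibrium (1/3, 1/3, 1/3).
```

In an $n$-player general-sum game this guarantee disappears, which is precisely the gap the framework is meant to let researchers study empirically.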

Authors (6)
  1. Alexander Shmakov (19 papers)
  2. John Lanier (5 papers)
  3. Stephen McAleer (41 papers)
  4. Rohan Achar (5 papers)
  5. Cristina Lopes (7 papers)
  6. Pierre Baldi (89 papers)
Citations (3)
