Learning to Play No-Press Diplomacy with Best Response Policy Iteration (2006.04635v4)

Published 8 Jun 2020 in cs.LG, cs.AI, cs.GT, cs.MA, and stat.ML

Abstract: Recent advances in deep reinforcement learning (RL) have led to considerable progress in many 2-player zero-sum games, such as Go, Poker and StarCraft. The purely adversarial nature of such games allows for conceptually simple and principled application of RL methods. However, real-world settings are many-agent, and agent interactions are complex mixtures of common-interest and competitive aspects. We consider Diplomacy, a 7-player board game designed to accentuate dilemmas resulting from many-agent interactions. It also features a large combinatorial action space and simultaneous moves, which are challenging for RL algorithms. We propose a simple yet effective approximate best response operator, designed to handle large combinatorial action spaces and simultaneous moves. We also introduce a family of policy iteration methods that approximate fictitious play. With these methods, we successfully apply RL to Diplomacy: we show that our agents convincingly outperform the previous state-of-the-art, and game theoretic equilibrium analysis shows that the new process yields consistent improvements.
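
For intuition about the fictitious-play idea the paper's policy iteration approximates, the sketch below runs classical fictitious play on a toy zero-sum matrix game (rock-paper-scissors), where each player repeatedly best-responds to the opponent's empirical average strategy. This is only an illustration, not the paper's algorithm: in the paper the best responses are learned approximately with RL over Diplomacy's combinatorial, simultaneous-move action space, whereas here they are computed exactly on a 3x3 game.

```python
import numpy as np

# Toy illustration of fictitious play on rock-paper-scissors.
# NOT the paper's method: the paper learns approximate best responses with
# RL over Diplomacy's combinatorial action space; here the best response is
# exact and the game is a tiny symmetric zero-sum matrix game.

# Row player's payoff matrix for (rock, paper, scissors).
PAYOFF = np.array([
    [ 0, -1,  1],
    [ 1,  0, -1],
    [-1,  1,  0],
])

def best_response(opponent_avg):
    """Exact best response to the opponent's empirical average strategy.

    Because PAYOFF is antisymmetric, the same computation gives the best
    response for either player against the other's average strategy.
    """
    expected = PAYOFF @ opponent_avg           # expected payoff of each pure action
    return np.eye(3)[np.argmax(expected)]      # one-hot best pure action

counts = [np.ones(3), np.ones(3)]              # initial uniform pseudo-counts
for t in range(10_000):
    avg = [c / c.sum() for c in counts]        # empirical average strategies
    counts[0] += best_response(avg[1])         # player 0 responds to player 1's average
    counts[1] += best_response(avg[0])         # player 1 responds to player 0's average

print("Player 0 average strategy:", counts[0] / counts[0].sum())
print("Player 1 average strategy:", counts[1] / counts[1].sum())
# Both averages converge toward the uniform Nash equilibrium (1/3, 1/3, 1/3),
# the standard fictitious-play guarantee for two-player zero-sum games.
```
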

Authors (14)
  1. Thomas Anthony (16 papers)
  2. Tom Eccles (18 papers)
  3. Andrea Tacchetti (26 papers)
  4. János Kramár (19 papers)
  5. Ian Gemp (36 papers)
  6. Thomas C. Hudson (1 paper)
  7. Nicolas Porcel (3 papers)
  8. Marc Lanctot (60 papers)
  9. Richard Everett (15 papers)
  10. Roman Werpachowski (4 papers)
  11. Satinder Singh (80 papers)
  12. Thore Graepel (48 papers)
  13. Yoram Bachrach (43 papers)
  14. Julien Pérolat (10 papers)
Citations (40)