Modeling Strong and Human-Like Gameplay with KL-Regularized Search (2112.07544v2)

Published 14 Dec 2021 in cs.MA, cs.AI, cs.GT, and cs.LG

Abstract: We consider the task of building strong but human-like policies in multi-agent decision-making problems, given examples of human behavior. Imitation learning is effective at predicting human actions but may not match the strength of expert humans, while self-play learning and search techniques (e.g. AlphaZero) lead to strong performance but may produce policies that are difficult for humans to understand and coordinate with. We show in chess and Go that regularizing search based on the KL divergence from an imitation-learned policy results in higher human prediction accuracy and stronger performance than imitation learning alone. We then introduce a novel regret minimization algorithm that is regularized based on the KL divergence from an imitation-learned policy, and show that using this algorithm for search in no-press Diplomacy yields a policy that matches the human prediction accuracy of imitation learning while being substantially stronger.
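
The core idea of KL-regularized search can be illustrated with a standard closed-form result: the policy maximizing $\mathbb{E}_{a\sim\pi}[Q(a)] - \lambda\,\mathrm{KL}(\pi \,\|\, \tau)$ for an anchor policy $\tau$ is $\pi(a) \propto \tau(a)\exp(Q(a)/\lambda)$. Below is a minimal sketch of that update in Python. It is not the paper's exact piKL regret-minimization algorithm; the action values and anchor probabilities are made-up illustrative numbers, and `kl_regularized_policy` is a hypothetical helper name.

```python
import numpy as np

def kl_regularized_policy(q_values, anchor_policy, lam):
    """Closed-form maximizer of E_pi[Q] - lam * KL(pi || anchor).

    pi(a) is proportional to anchor(a) * exp(Q(a) / lam).
    Large lam keeps pi close to the imitation-learned anchor
    (human-like); small lam approaches argmax over Q (strong).
    """
    logits = np.log(anchor_policy) + np.asarray(q_values) / lam
    logits -= logits.max()  # subtract max for numerical stability
    pi = np.exp(logits)
    return pi / pi.sum()

# Toy example: three actions in some state (values are made up).
anchor = np.array([0.6, 0.3, 0.1])  # imitation-learned ("human") policy
q = np.array([0.0, 1.0, 0.2])       # search-estimated action values

for lam in (10.0, 1.0, 0.1):
    print(lam, kl_regularized_policy(q, anchor, lam).round(3))
```

Sweeping the regularization strength $\lambda$ makes the trade-off in the abstract concrete: as $\lambda \to \infty$ the output matches the imitation-learned policy (maximal human prediction accuracy), and as $\lambda \to 0$ it concentrates on the highest-value action found by search (maximal strength).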

Authors (8)
  1. Athul Paul Jacob (11 papers)
  2. David J. Wu (9 papers)
  3. Gabriele Farina (78 papers)
  4. Adam Lerer (30 papers)
  5. Hengyuan Hu (22 papers)
  6. Anton Bakhtin (16 papers)
  7. Jacob Andreas (116 papers)
  8. Noam Brown (25 papers)
Citations (46)