Maximum Entropy Population-Based Training for Zero-Shot Human-AI Coordination (2112.11701v3)

Published 22 Dec 2021 in cs.AI

Abstract: We study the problem of training a Reinforcement Learning (RL) agent that collaborates with humans without using any human data. Although such agents can be obtained through self-play training, they can suffer significantly from distributional shift when paired with unencountered partners, such as humans. To mitigate this distributional shift, we propose Maximum Entropy Population-based training (MEP). In MEP, agents in the population are trained with our derived Population Entropy bonus to promote both pairwise diversity between agents and individual diversity of agents themselves, and a common best agent is trained by pairing with agents in this diversified population via prioritized sampling, where the prioritization is dynamically adjusted based on training progress. We demonstrate the effectiveness of MEP by comparing it to Self-Play PPO (SP), Population-Based Training (PBT), Trajectory Diversity (TrajeDi), and Fictitious Co-Play (FCP) in the Overcooked game environment, with partners being both human proxy models and real humans. A supplementary video showing experimental results is available at https://youtu.be/Xh-FKD0AAKE.

Authors (8)
  1. Rui Zhao (241 papers)
  2. Jinming Song (1 paper)
  3. Yufeng Yuan (15 papers)
  4. Hu Haifeng (1 paper)
  5. Yang Gao (761 papers)
  6. Yi Wu (171 papers)
  7. Zhongqian Sun (10 papers)
  8. Yang Wei (18 papers)
Citations (46)