
Episodic Multi-agent Reinforcement Learning with Curiosity-Driven Exploration (2111.11032v1)

Published 22 Nov 2021 in cs.LG and cs.AI

Abstract: Efficient exploration in deep cooperative multi-agent reinforcement learning (MARL) remains challenging in complex coordination problems. In this paper, we introduce a novel Episodic Multi-agent reinforcement learning method with Curiosity-driven exploration, called EMC. We leverage an insight from popular factorized MARL algorithms: the "induced" individual Q-values, i.e., the individual utility functions used for local execution, are embeddings of local action-observation histories and, due to reward backpropagation during centralized training, can capture the interactions between agents. We therefore use prediction errors of individual Q-values as intrinsic rewards for coordinated exploration, and utilize episodic memory to exploit explored informative experience to boost policy training. Because the dynamics of an agent's individual Q-value function capture both the novelty of states and the influence of other agents, our intrinsic reward induces coordinated exploration toward new or promising states. We illustrate the advantages of our method with didactic examples and demonstrate that it significantly outperforms state-of-the-art MARL baselines on challenging tasks in the StarCraft II micromanagement benchmark.
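The core mechanism described in the abstract, using the prediction error of an agent's individual Q-values as an intrinsic exploration bonus, can be sketched minimally as follows. This is not the authors' implementation; it assumes a toy linear predictor and illustrative shapes (`obs_dim`, `n_actions`) purely to show how such a curiosity bonus decays as a state-Q pair becomes familiar:

```python
import numpy as np

# Hypothetical sketch of a curiosity-driven intrinsic reward in the spirit
# of EMC (not the paper's code): a predictor regresses an agent's
# individual Q-values from its local observation, and the prediction
# error serves as an intrinsic exploration bonus.

rng = np.random.default_rng(0)
obs_dim, n_actions = 4, 3

# Linear predictor of an agent's individual Q-values from its observation.
W = rng.normal(scale=0.1, size=(n_actions, obs_dim))

def intrinsic_reward(obs, q_individual, lr=0.01):
    """Return curiosity bonus = prediction error of the individual
    Q-values, then update the predictor toward the observed Q-values."""
    global W
    q_pred = W @ obs
    error = q_individual - q_pred
    bonus = float(np.linalg.norm(error))   # intrinsic reward signal
    W += lr * np.outer(error, obs)         # online predictor update
    return bonus

# Toy check: repeated visits to the same (observation, Q-value) pair
# shrink the bonus, so novel interactions earn more exploration reward.
obs = rng.normal(size=obs_dim)
q = rng.normal(size=n_actions)
bonuses = [intrinsic_reward(obs, q) for _ in range(200)]
```

In the full method, the Q-values would come from the factorized MARL learner's per-agent utility networks, so their drift also reflects other agents' influence via centralized training; the linear predictor here only stands in for that machinery.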

Authors (9)
  1. Lulu Zheng
  2. Jiarui Chen
  3. Jianhao Wang
  4. Jiamin He
  5. Yujing Hu
  6. Yingfeng Chen
  7. Changjie Fan
  8. Yang Gao
  9. Chongjie Zhang
Citations (71)