
Hierarchical Deep Multiagent Reinforcement Learning with Temporal Abstraction (1809.09332v2)

Published 25 Sep 2018 in cs.LG, cs.AI, and cs.MA

Abstract: Multiagent reinforcement learning (MARL) is commonly considered to suffer from non-stationary environments and an exponentially growing policy space. The problem becomes even more challenging when rewards are sparse and delayed over long trajectories. In this paper, we study hierarchical deep MARL in cooperative multiagent problems with sparse and delayed rewards. Using temporal abstraction, we decompose the problem into a hierarchy of different time scales and investigate how agents can learn high-level coordination on top of the independent skills learned at the low level. Three hierarchical deep MARL architectures are proposed to learn hierarchical policies under different MARL paradigms. In addition, we propose a new experience replay mechanism to alleviate the sparsity of transitions at the high level of abstraction and the non-stationarity of multiagent learning. We empirically demonstrate the effectiveness of our approaches in two domains with extremely sparse feedback: (1) a variety of Multiagent Trash Collection tasks, and (2) a challenging online mobile game, Fever Basketball Defense.
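
To make the temporal-abstraction idea concrete, here is a minimal single-agent sketch, not the paper's architecture: a high-level policy chooses a skill every K_STEPS primitive steps, the low-level policy acts under that skill, and each completed segment is stored as a single high-level transition, which illustrates why experience at the high level of abstraction is sparse. All names (K_STEPS, toy_env_step, the placeholder policies) are illustrative assumptions standing in for the learned networks described in the paper.

```python
import random
from collections import deque

# Illustrative constants (assumptions, not values from the paper).
K_STEPS = 8              # high-level decision interval (temporal abstraction)
NUM_SKILLS = 4           # number of low-level skills the high level can choose
REPLAY_CAPACITY = 10_000

# Buffer of high-level transitions; one entry spans a whole K-step segment.
high_level_replay = deque(maxlen=REPLAY_CAPACITY)

def high_level_policy(state):
    """Placeholder: pick a skill index; a real agent would use a learned Q-network."""
    return random.randrange(NUM_SKILLS)

def low_level_policy(state, skill):
    """Placeholder: pick a primitive action conditioned on the active skill."""
    return (skill + random.randrange(2)) % 4  # toy action space of size 4

def toy_env_step(state, action):
    """Toy environment: state is an integer position; reward only at position 10."""
    next_state = max(0, state + (1 if action % 2 == 0 else -1))
    reward = 1.0 if next_state == 10 else 0.0
    return next_state, reward, next_state == 10

def run_episode(env_step, initial_state, max_steps=64):
    """Roll out one episode; each high-level transition covers up to K_STEPS low-level steps."""
    state, t, done = initial_state, 0, False
    while t < max_steps and not done:
        skill = high_level_policy(state)
        seg_start, seg_reward = state, 0.0
        for _ in range(K_STEPS):                    # low level acts under the chosen skill
            action = low_level_policy(state, skill)
            state, reward, done = env_step(state, action)
            seg_reward += reward
            t += 1
            if done or t >= max_steps:
                break
        # One stored transition per K-step segment: this is the sparse
        # high-level experience that the paper's replay mechanism targets.
        high_level_replay.append((seg_start, skill, seg_reward, state))

if __name__ == "__main__":
    run_episode(toy_env_step, initial_state=0)
    print(f"stored {len(high_level_replay)} high-level transitions")
```

The sketch only shows where the sparse high-level transitions arise; the paper's contribution is how to replay them (and cope with multiagent non-stationarity), which is not reproduced here.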

Authors (11)
  1. Hongyao Tang (28 papers)
  2. Jianye Hao (185 papers)
  3. Tangjie Lv (35 papers)
  4. Yingfeng Chen (30 papers)
  5. Zongzhang Zhang (33 papers)
  6. Hangtian Jia (4 papers)
  7. Chunxu Ren (2 papers)
  8. Yan Zheng (102 papers)
  9. Zhaopeng Meng (23 papers)
  10. Changjie Fan (79 papers)
  11. Li Wang (470 papers)
Citations (23)