Learning Structured Communication for Multi-agent Reinforcement Learning (2002.04235v1)
Abstract: This work explores large-scale multi-agent communication mechanisms in the multi-agent reinforcement learning (MARL) setting. We first summarize the general categories of communication-structure topologies in the MARL literature, which are typically specified by hand. We then propose a novel framework, termed Learning Structured Communication (LSC), that uses a more flexible and efficient communication topology. LSC adaptively groups agents into different hierarchical formations over episodes; the grouping is produced by an auxiliary task combined with a hierarchical routing protocol. Given each formed topology, a hierarchical graph neural network is learned to generate and propagate messages effectively across inter- and intra-group communication. In contrast to existing communication mechanisms, our method has an explicit yet learnable design for hierarchical communication. Experiments on challenging tasks show that LSC achieves high communication efficiency, scalability, and global cooperation capability.
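To make the two-level communication pattern described above concrete, here is a minimal sketch of hierarchical message passing: an intra-group pass, an inter-group exchange between group embeddings, and a broadcast back to agents. This is not the paper's actual architecture; the mean-pooling aggregation, the fixed grouping, the random weights, and all names (`layer`, `W_intra`, etc.) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, W, b):
    """Affine map plus tanh, standing in for one learned GNN layer."""
    return np.tanh(x @ W + b)

# Hypothetical setup: 6 agents with 8-dim hidden states, split into 2 groups.
# In LSC the grouping is adaptive (learned via an auxiliary task); here it
# is fixed for illustration.
n_agents, d = 6, 8
groups = {0: [0, 1, 2], 1: [3, 4, 5]}
h = rng.normal(size=(n_agents, d))          # per-agent hidden states

# Stand-in random weights for the intra-group, inter-group, and broadcast steps.
W_intra, b_intra = rng.normal(size=(d, d)) / np.sqrt(d), np.zeros(d)
W_inter, b_inter = rng.normal(size=(d, d)) / np.sqrt(d), np.zeros(d)
W_down,  b_down  = rng.normal(size=(2 * d, d)) / np.sqrt(2 * d), np.zeros(d)

# 1) Intra-group pass: each agent fuses its state with a mean-pooled
#    summary of its groupmates (mean pooling is an assumption).
h_intra = np.empty_like(h)
for members in groups.values():
    pooled = h[members].mean(axis=0)
    for i in members:
        h_intra[i] = layer(h[i] + pooled, W_intra, b_intra)

# 2) Inter-group pass: each group forms an embedding and receives a message
#    aggregated from the other groups' embeddings.
g = np.stack([h_intra[m].mean(axis=0) for m in groups.values()])
g_msg = np.stack([
    layer(np.delete(g, k, axis=0).mean(axis=0), W_inter, b_inter)
    for k in range(len(g))
])

# 3) Broadcast: each agent combines its intra-group state with its
#    group's incoming inter-group message.
group_of = {i: k for k, members in groups.items() for i in members}
h_out = np.stack([
    layer(np.concatenate([h_intra[i], g_msg[group_of[i]]]), W_down, b_down)
    for i in range(n_agents)
])

print(h_out.shape)   # (6, 8): one updated hidden state per agent
```

The hierarchy is what gives the claimed efficiency: agents exchange dense messages only within their group, while cross-group information flows through compact group embeddings rather than all-to-all agent links.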
- Junjie Sheng
- Xiangfeng Wang
- Bo Jin
- Junchi Yan
- Wenhao Li
- Tsung-Hui Chang
- Jun Wang
- Hongyuan Zha