
Learning Emergent Discrete Message Communication for Cooperative Reinforcement Learning (2102.12550v1)

Published 24 Feb 2021 in cs.LG, cs.AI, and cs.MA

Abstract: Communication is an important factor that enables agents to work cooperatively in multi-agent reinforcement learning (MARL). Most previous work uses continuous message communication, whose high representational capacity comes at the expense of interpretability. Allowing agents to learn their own discrete message communication protocol, emerging from a variety of domains, can increase interpretability for human designers and other agents. This paper proposes a method to generate discrete messages analogous to human languages and to achieve communication through a broadcast-and-listen mechanism based on self-attention. We show that discrete message communication performs comparably to continuous message communication but with a much smaller vocabulary size. Furthermore, we propose an approach that allows humans to interactively send discrete messages to agents.
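To make the broadcast-and-listen mechanism concrete, below is a minimal PyTorch sketch of one plausible instantiation: each agent maps its observation to a token from a small vocabulary, broadcasts that token's embedding, and attends over all broadcast messages with self-attention. The straight-through Gumbel-softmax discretization, the class name `DiscreteCommAgent`, and all dimensions (vocabulary size 8, message dimension 32) are illustrative assumptions; the abstract does not specify the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DiscreteCommAgent(nn.Module):
    """One agent: encodes its observation, emits a discrete message token
    from a small vocabulary, and listens to all broadcast messages via
    self-attention. A hypothetical sketch, not the authors' implementation."""
    def __init__(self, obs_dim=16, hidden=32, vocab_size=8, msg_dim=32):
        super().__init__()
        self.encoder = nn.Linear(obs_dim, hidden)
        self.to_logits = nn.Linear(hidden, vocab_size)  # scores over the vocabulary
        self.embed = nn.Embedding(vocab_size, msg_dim)  # token -> message vector
        self.attn = nn.MultiheadAttention(msg_dim, num_heads=4, batch_first=True)

    def speak(self, obs):
        h = torch.relu(self.encoder(obs))
        logits = self.to_logits(h)
        # Straight-through Gumbel-softmax: a one-hot (discrete) token on the
        # forward pass, a differentiable relaxation on the backward pass.
        # The paper's exact discretization is not given in the abstract;
        # this is a common stand-in.
        one_hot = F.gumbel_softmax(logits, tau=1.0, hard=True)
        return one_hot @ self.embed.weight              # embedded discrete message

    def listen(self, own_msg, all_msgs):
        # Broadcast-and-listen: the agent's own message queries the pool of
        # every agent's broadcast message with scaled dot-product attention.
        q = own_msg.view(1, 1, -1)                      # (batch=1, seq=1, msg_dim)
        kv = all_msgs.unsqueeze(0)                      # (1, n_agents, msg_dim)
        out, _ = self.attn(q, kv, kv)
        return out.reshape(-1)                          # attended message summary

# Usage: three agents each broadcast one token, then each listens to all three.
agents = [DiscreteCommAgent() for _ in range(3)]
obs = torch.randn(3, 16)                                # one observation per agent
msgs = torch.stack([a.speak(o) for a, o in zip(agents, obs)])     # (3, msg_dim)
listened = torch.stack([a.listen(m, msgs) for a, m in zip(agents, msgs)])
```

Because the forward pass commits to a single token from a small vocabulary, each broadcast is a symbol a human can read off directly, which is the interpretability advantage the abstract claims over continuous messages.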

Authors (4)
  1. Sheng Li (217 papers)
  2. Yutai Zhou (5 papers)
  3. Ross Allen (6 papers)
  4. Mykel J. Kochenderfer (215 papers)
Citations (13)