
Multi-agent Communication meets Natural Language: Synergies between Functional and Structural Language Learning (2005.07064v1)

Published 14 May 2020 in cs.CL, cs.AI, and cs.LG

Abstract: We present a method for combining multi-agent communication and traditional data-driven approaches to natural language learning, with an end goal of teaching agents to communicate with humans in natural language. Our starting point is a language model that has been trained on generic, not task-specific language data. We then place this model in a multi-agent self-play environment that generates task-specific rewards used to adapt or modulate the model, turning it into a task-conditional language model. We introduce a new way of combining the two types of learning based on the idea of reranking language model samples, and show that this method outperforms others in communicating with humans in a visual referential communication task. Finally, we present a taxonomy of different types of language drift that can occur, alongside a set of measures to detect them.
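The reranking idea in the abstract can be sketched in a few lines: draw several candidate messages from a generically trained sampler, score each with a task-specific reward, and emit the highest-scoring one. The sketch below is a minimal illustration with hypothetical stand-ins; `sample_captions` and `task_reward` are placeholders (a toy vocabulary and a word-overlap score), not the paper's language model or its self-play reward.

```python
import random

def sample_captions(n):
    # Placeholder sampler; in the paper's setup, candidates would be
    # sampled from a pretrained language model.
    vocab = ["a red circle", "a blue square", "a red square", "a blue circle"]
    return [random.choice(vocab) for _ in range(n)]

def task_reward(caption, target):
    # Placeholder reward: word overlap with the target description,
    # standing in for a listener agent's task success signal.
    return len(set(caption.split()) & set(target.split()))

def rerank(target, n_samples=16):
    """Sample candidate messages and return the highest-reward one."""
    candidates = sample_captions(n_samples)
    return max(candidates, key=lambda c: task_reward(c, target))

print(rerank("a red square", n_samples=32))
```

Because generation and scoring are decoupled, the sampler never sees task gradients directly, which is what lets reranking steer a generic language model toward a task while limiting drift away from natural language.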

Authors (3)
  1. Angeliki Lazaridou (34 papers)
  2. Anna Potapenko (4 papers)
  3. Olivier Tieleman (10 papers)
Citations (90)