Towards Multi-Agent Communication-Based Language Learning (1605.07133v1)

Published 23 May 2016 in cs.CL, cs.CV, and cs.LG

Abstract: We propose an interactive multimodal framework for language learning. Instead of being passively exposed to large amounts of natural text, our learners (implemented as feed-forward neural networks) engage in cooperative referential games starting from a tabula rasa setup, and thus develop their own language from the need to communicate in order to succeed at the game. Preliminary experiments provide promising results, but also suggest that it is important to ensure that agents trained in this way do not develop an ad-hoc communication code only effective for the game they are playing.
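To make the setup concrete, below is a minimal sketch of a cooperative referential game between two feed-forward agents: a sender that maps the target object's features to a discrete symbol, and a receiver that uses the symbol to pick the target out of a set of candidates, with both agents rewarded only when the receiver succeeds. All names, layer sizes, the synthetic data, and the REINFORCE-style training rule here are illustrative assumptions, not the paper's exact architecture or objective.

```python
# Illustrative referential-game sketch (assumed details, not the paper's exact setup).
import torch
import torch.nn as nn

N_FEATURES, VOCAB, N_CANDIDATES, HIDDEN, BATCH = 16, 10, 2, 32, 64

class Sender(nn.Module):
    """Maps the target object's features to a distribution over discrete symbols."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(N_FEATURES, HIDDEN), nn.ReLU(),
                                 nn.Linear(HIDDEN, VOCAB))
    def forward(self, target):
        return torch.distributions.Categorical(logits=self.net(target))

class Receiver(nn.Module):
    """Scores each candidate object against the embedding of the received symbol."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, HIDDEN)
        self.proj = nn.Linear(N_FEATURES, HIDDEN)
    def forward(self, symbol, candidates):
        # candidates: (batch, N_CANDIDATES, N_FEATURES)
        scores = (self.proj(candidates) * self.embed(symbol).unsqueeze(1)).sum(-1)
        return torch.distributions.Categorical(logits=scores)

sender, receiver = Sender(), Receiver()
opt = torch.optim.Adam(list(sender.parameters()) + list(receiver.parameters()), lr=1e-3)

for step in range(2000):
    # Synthetic "objects": random feature vectors standing in for image features.
    candidates = torch.rand(BATCH, N_CANDIDATES, N_FEATURES)
    target_idx = torch.randint(N_CANDIDATES, (BATCH,))
    target = candidates[torch.arange(BATCH), target_idx]

    msg_dist = sender(target)
    symbol = msg_dist.sample()                  # discrete message, no pre-assigned meaning
    choice_dist = receiver(symbol, candidates)
    choice = choice_dist.sample()

    reward = (choice == target_idx).float()     # shared payoff: 1 iff the receiver picks the target
    baseline = reward.mean().detach()           # simple variance-reduction baseline
    loss = -((reward - baseline) *
             (msg_dist.log_prob(symbol) + choice_dist.log_prob(choice))).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```

Because the symbols carry no meaning at the start (the tabula rasa setup), any systematic sender-receiver convention that emerges does so purely from the pressure to win the game; the abstract's caveat is that such a convention may remain an ad-hoc code unless the setup is designed to encourage more general, transferable meanings.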

Authors (3)
  1. Angeliki Lazaridou (34 papers)
  2. Nghia The Pham (6 papers)
  3. Marco Baroni (58 papers)
Citations (24)