Focus on What's Informative and Ignore What's not: Communication Strategies in a Referential Game (1911.01892v1)

Published 5 Nov 2019 in cs.CL and cs.AI

Abstract: Research in multi-agent cooperation has shown that artificial agents are able to learn to play a simple referential game while developing a shared lexicon. This lexicon is not easy to analyze, as it does not show many properties of a natural language. In a simple referential game with two neural network-based agents, we analyze the object-symbol mapping, trying to understand what kind of strategy was used to develop the emergent language. We see that, when the environment is uniformly distributed, the agents rely on a random subset of features to describe the objects. When we modify the objects, making one feature non-uniformly distributed, the agents realize it is less informative and start to ignore it and, surprisingly, make better use of the remaining features. This interesting result suggests that more natural, less uniformly distributed environments might aid in spurring the emergence of better-behaved languages.
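The abstract's central manipulation is making one object feature non-uniformly distributed, which lowers its information content. A minimal sketch (not the paper's code; feature counts, value counts, and the skew probability are illustrative assumptions) shows why a skewed feature carries less entropy and is therefore less useful for discriminating objects:

```python
import random
from collections import Counter
from math import log2

# Hypothetical setup: objects are tuples of 3 discrete features,
# each normally taking one of 4 equiprobable values. Skewing one
# feature so a single value dominates lowers that feature's entropy.

def sample_objects(n, values_per_feature=4, skew_feature=None, skew_p=0.9, seed=0):
    rng = random.Random(seed)
    objs = []
    for _ in range(n):
        obj = []
        for f in range(3):
            if f == skew_feature and rng.random() < skew_p:
                obj.append(0)  # one value dominates the skewed feature
            else:
                obj.append(rng.randrange(values_per_feature))
        objs.append(tuple(obj))
    return objs

def feature_entropy(objs, f):
    """Empirical Shannon entropy (bits) of feature f across the objects."""
    counts = Counter(o[f] for o in objs)
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

uniform = sample_objects(10_000)
skewed = sample_objects(10_000, skew_feature=0)
print(feature_entropy(uniform, 0))  # ~2 bits: 4 roughly equiprobable values
print(feature_entropy(skewed, 0))   # well below 2 bits: less informative
```

Under this toy measure, an agent that wants short, discriminative messages gains little by encoding the skewed feature, which is consistent with the paper's observation that the agents learn to ignore it.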

Authors (4)
  1. Roberto Dessì (12 papers)
  2. Diane Bouchacourt (32 papers)
  3. Davide Crepaldi (1 paper)
  4. Marco Baroni (58 papers)
Citations (6)