Discrete Messages Improve Communication Efficiency among Isolated Intelligent Agents (2312.15985v3)

Published 26 Dec 2023 in cs.LG, cs.IT, and math.IT

Abstract: Individuals, despite having varied life experiences and learning processes, can communicate effectively through language. This study explores the efficiency of language as a communication medium. We put forth two specific hypotheses: first, that discrete messages are more effective than continuous ones when agents have diverse personal experiences; second, that communication using multiple discrete tokens is more advantageous than communication using a single token. To validate these hypotheses, we designed multi-agent machine learning experiments to assess communication efficiency under various information transmission methods between speakers and listeners. Our empirical findings indicate that, in scenarios where agents are exposed to different data, communicating through sentences composed of discrete tokens offers the best inter-agent communication efficiency. The limitations of our findings include the lack of a systematic advantage over more sophisticated encoder-decoder models such as the variational autoencoder, and the lack of evaluation on non-image datasets, which we leave for future studies.
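The setup described in the abstract can be pictured as a speaker that compresses its input into a short "sentence" of discrete tokens and a listener that decodes that sentence back into a representation. The sketch below is an illustrative reconstruction, not the authors' implementation: it trains through the discrete channel with a straight-through Gumbel-softmax (one common technique for this), and all names and sizes (`VOCAB_SIZE`, `SENTENCE_LEN`, `FEAT_DIM`, the `Speaker`/`Listener` modules) are hypothetical choices for illustration.

```python
# Minimal sketch of multi-token discrete communication (assumed setup,
# not the paper's actual code). A speaker emits SENTENCE_LEN one-hot
# tokens; a listener decodes them back into a feature vector.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB_SIZE = 16    # number of discrete symbols (hypothetical)
SENTENCE_LEN = 4   # tokens per message (hypothetical)
FEAT_DIM = 32      # feature dimensionality (hypothetical)

class Speaker(nn.Module):
    def __init__(self):
        super().__init__()
        # Map a feature vector to logits for each token slot.
        self.to_logits = nn.Linear(FEAT_DIM, SENTENCE_LEN * VOCAB_SIZE)

    def forward(self, x, tau=1.0):
        logits = self.to_logits(x).view(-1, SENTENCE_LEN, VOCAB_SIZE)
        # Straight-through Gumbel-softmax: exact one-hot tokens in the
        # forward pass, differentiable relaxation in the backward pass.
        return F.gumbel_softmax(logits, tau=tau, hard=True)

class Listener(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Linear(VOCAB_SIZE, FEAT_DIM)
        self.decode = nn.Linear(SENTENCE_LEN * FEAT_DIM, FEAT_DIM)

    def forward(self, tokens):
        # Embed each discrete token, then decode the whole "sentence".
        e = self.embed(tokens).flatten(1)
        return self.decode(e)

# Toy round trip: the listener reconstructs the speaker's input.
speaker, listener = Speaker(), Listener()
x = torch.randn(8, FEAT_DIM)
msg = speaker(x)                  # (8, SENTENCE_LEN, VOCAB_SIZE), one-hot
x_hat = listener(msg)
loss = F.mse_loss(x_hat, x)
loss.backward()                   # gradients flow through the channel
```

With `hard=True`, the message is genuinely discrete at inference time while remaining trainable end to end; setting `SENTENCE_LEN = 1` would recover the single-token baseline that the paper's second hypothesis compares against.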

Authors (6)
  1. Hang Chen (77 papers)
  2. Yuchuan Jang (1 paper)
  3. Weijie Zhou (6 papers)
  4. Ziwei Chen (12 papers)
  5. Dianbo Liu (59 papers)
  6. Cristian Meo (13 papers)