
Emergent collective intelligence from massive-agent cooperation and competition (2301.01609v2)

Published 4 Jan 2023 in cs.AI and cs.MA

Abstract: Inspired by organisms evolving through cooperation and competition between different populations on Earth, we study the emergence of artificial collective intelligence through massive-agent reinforcement learning. To this end, We propose a new massive-agent reinforcement learning environment, Lux, where dynamic and massive agents in two teams scramble for limited resources and fight off the darkness. In Lux, we build our agents through the standard reinforcement learning algorithm in curriculum learning phases and leverage centralized control via a pixel-to-pixel policy network. As agents co-evolve through self-play, we observe several stages of intelligence, from the acquisition of atomic skills to the development of group strategies. Since these learned group strategies arise from individual decisions without an explicit coordination mechanism, we claim that artificial collective intelligence emerges from massive-agent cooperation and competition. We further analyze the emergence of various learned strategies through metrics and ablation studies, aiming to provide insights for reinforcement learning implementations in massive-agent environments.
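The "centralized control via a pixel-to-pixel policy network" mentioned in the abstract can be illustrated with a minimal sketch: a single policy maps the full map observation to one action per cell, and each unit simply reads the action at its own cell. The function names, map size, action set, and the toy decision rule below are illustrative assumptions, not the paper's actual implementation.

```python
# Toy sketch of centralized pixel-to-pixel control (assumed names/logic, not
# the paper's code): one policy call produces an action for every map cell;
# units then look up the action at their own position.

H, W = 4, 4
ACTIONS = ["stay", "up", "down", "left", "right"]

def pixel_to_pixel_policy(obs):
    """Map an HxW observation to an HxW grid of actions in one pass.

    In the paper this would be a convolutional network; here a toy rule
    stands in so the control flow is visible.
    """
    return [[ACTIONS[(r + c) % len(ACTIONS)] for c in range(W)]
            for r in range(H)]

obs = [[0] * W for _ in range(H)]          # placeholder map observation
action_map = pixel_to_pixel_policy(obs)    # one centralized forward pass

units = [(0, 0), (2, 3)]                   # unit positions (row, col)
actions = {u: action_map[u[0]][u[1]] for u in units}
```

The key property this sketch captures is that every unit's action comes from the same single forward pass over the whole map, so the number of policy evaluations does not grow with the number of units.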

Authors (9)
  1. Hanmo Chen (3 papers)
  2. Stone Tao (10 papers)
  3. Jiaxin Chen (55 papers)
  4. Weihan Shen (2 papers)
  5. Xihui Li (1 paper)
  6. Chenghui Yu (12 papers)
  7. Sikai Cheng (4 papers)
  8. Xiaolong Zhu (18 papers)
  9. Xiu Li (166 papers)
Citations (4)
