
Wireless Multi-Agent Generative AI: From Connected Intelligence to Collective Intelligence (2307.02757v1)

Published 6 Jul 2023 in cs.MA

Abstract: The convergence of generative LLMs, edge networks, and multi-agent systems represents a groundbreaking synergy that holds immense promise for future wireless generations, harnessing the power of collective intelligence and paving the way for self-governed networks where intelligent decision-making happens right at the edge. This article lays the stepping stone for incorporating multi-agent generative AI in wireless networks and sets the scene for realizing on-device LLMs, in which multi-agent LLMs collaboratively plan and solve tasks to achieve a number of network goals. We further investigate the profound limitations of cloud-based LLMs, and explore multi-agent LLMs from a game-theoretic perspective, where agents collaboratively solve tasks in competitive environments. Moreover, we establish the underpinnings for the architecture design of wireless multi-agent generative AI systems at the network level and the agent level, and we identify the wireless technologies that are envisioned to play a key role in enabling on-device LLMs. To demonstrate the promising potential of wireless multi-agent generative AI networks, we highlight the benefits that can be achieved when implementing wireless generative agents in intent-based networking, and we provide a case study to showcase how on-device LLMs can contribute to solving network intents in a collaborative fashion. We finally shed light on potential challenges and sketch a research roadmap towards realizing the vision of wireless collective intelligence.
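The case study mentioned in the abstract, where on-device LLMs collaboratively resolve a network intent, can be pictured as a simple planner/executor exchange between generative agents. The sketch below is illustrative only and not taken from the paper: the agent roles, the `query_llm` stub, and the example intent are hypothetical placeholders for what would be an on-device model call in the envisioned architecture.

```python
# Minimal sketch of two on-device generative agents cooperating on a network
# intent. query_llm is a stub standing in for a local LLM inference call.

from dataclasses import dataclass, field
from typing import List


def query_llm(prompt: str) -> str:
    """Stand-in for an on-device LLM call (canned logic, hypothetical)."""
    if "decompose" in prompt:
        return ("re-balance load between cells 7 and 8; "
                "raise scheduling priority for the URLLC slice")
    # Last line of the prompt carries the sub-task handed to this agent.
    return f"ack: executing '{prompt.splitlines()[-1]}'"


@dataclass
class GenerativeAgent:
    name: str
    role: str
    log: List[str] = field(default_factory=list)

    def respond(self, message: str) -> str:
        prompt = f"You are the {self.role} agent.\n{message}"
        reply = query_llm(prompt)
        self.log.append(reply)
        return reply


def solve_intent(intent: str) -> List[str]:
    planner = GenerativeAgent("planner", "RAN planning")
    executor = GenerativeAgent("executor", "cell configuration")

    # Planner decomposes the high-level intent into concrete sub-tasks;
    # the executor acknowledges each one. In the paper's vision, this
    # exchange happens over the wireless link between edge devices.
    plan = planner.respond(f"decompose this network intent: {intent}")
    return [executor.respond(task.strip()) for task in plan.split(";")]


if __name__ == "__main__":
    for result in solve_intent("reduce latency in cell 7 below 10 ms"):
        print(result)
```

The split into a planning agent and an executing agent mirrors the abstract's distinction between network-level and agent-level design; a real system would replace the stub with an on-device model and a negotiation protocol rather than a single round of messages.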

Authors (5)
  1. Hang Zou (23 papers)
  2. Qiyang Zhao (15 papers)
  3. Lina Bariah (26 papers)
  4. Mehdi Bennis (333 papers)
  5. Merouane Debbah (269 papers)
Citations (31)