GenAINet: Enabling Wireless Collective Intelligence via Knowledge Transfer and Reasoning (2402.16631v2)

Published 26 Feb 2024 in cs.AI, cs.NI, and eess.SP

Abstract: Generative artificial intelligence (GenAI) and communication networks are expected to have groundbreaking synergies in 6G. Connecting GenAI agents over a wireless network can potentially unleash the power of collective intelligence and pave the way toward artificial general intelligence (AGI). However, current wireless networks are designed as a "data pipe" and are not suited to accommodate and leverage the power of GenAI. In this paper, we propose the GenAINet framework, in which distributed GenAI agents communicate knowledge (high-level concepts or abstractions) to accomplish arbitrary tasks. We first present a network architecture that integrates GenAI capabilities to manage both network protocols and applications. Building on this, we investigate effective communication and reasoning problems by proposing a semantic-native GenAINet. Specifically, GenAI agents extract semantic concepts from multi-modal raw data and build a knowledge base representing their semantic relations, which GenAI models retrieve for planning and reasoning. Under this paradigm, an agent can learn quickly from other agents' experience to make better decisions with efficient communication. Furthermore, we conduct two case studies: in wireless device query, we show that extracting and transferring knowledge can improve query accuracy with reduced communication; and in wireless power control, we show that distributed agents can improve decisions via collaborative reasoning. Finally, we argue that developing a hierarchical, semantic-level Telecom world model is a key path towards a network of collective intelligence.
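The semantic-native pipeline described in the abstract (extract concepts from raw data, store them in a knowledge base, retrieve them for planning and reasoning) can be illustrated with a minimal Python sketch. This is not the paper's implementation: the class KnowledgeBase, the function extract_concept, and the hash-based encoder are hypothetical stand-ins, and a real agent would use a multi-modal GenAI encoder with an LLM planner on top of retrieval.

    import numpy as np

    class KnowledgeBase:
        """Toy semantic knowledge base: stores (embedding, concept) pairs
        and retrieves the closest concepts by cosine similarity."""

        def __init__(self):
            self.embeddings = []  # unit-norm 1-D arrays
            self.concepts = []    # parallel list of concept strings

        def add(self, embedding, concept):
            self.embeddings.append(embedding / np.linalg.norm(embedding))
            self.concepts.append(concept)

        def retrieve(self, query_embedding, k=1):
            q = query_embedding / np.linalg.norm(query_embedding)
            sims = np.array([e @ q for e in self.embeddings])
            top = np.argsort(sims)[::-1][:k]
            return [(self.concepts[i], float(sims[i])) for i in top]

    def extract_concept(raw_data, dim=8):
        # Stand-in for a multi-modal GenAI encoder: deterministically
        # maps raw data to a semantic embedding within one process run.
        rng = np.random.default_rng(abs(hash(raw_data)) % (2**32))
        return rng.standard_normal(dim)

    # Agent A shares compact concept embeddings instead of raw observations.
    kb = KnowledgeBase()
    kb.add(extract_concept("high interference on channel 3"), "reduce tx power")
    kb.add(extract_concept("low battery on device 7"), "defer transmission")

    # Agent B reuses Agent A's experience by retrieving the nearest concept,
    # transferring a short embedding rather than the raw sensor data.
    query = extract_concept("high interference on channel 3")
    print(kb.retrieve(query, k=1))  # -> [('reduce tx power', ~1.0)]

The point of the sketch is the communication saving: agents exchange short concept embeddings and labels rather than raw multi-modal data, mirroring the reduced-communication result reported in the wireless device query case study.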

Authors (8)
  1. Hang Zou (23 papers)
  2. Qiyang Zhao (15 papers)
  3. Lina Bariah (26 papers)
  4. Yu Tian (249 papers)
  5. Mehdi Bennis (333 papers)
  6. Samson Lasaulce (81 papers)
  7. Faouzi Bader (23 papers)
  8. Merouane Debbah (269 papers)
Citations (4)
