Large Language Model Enhanced Multi-Agent Systems for 6G Communications (2312.07850v1)
Abstract: The rapid development of large language models (LLMs) presents huge opportunities for 6G communications, e.g., network optimization and management, by allowing users to specify task requirements to LLMs in natural language. However, directly applying native LLMs in 6G encounters various challenges, such as a lack of private communication data and knowledge, as well as limited logical reasoning, evaluation, and refinement abilities. Integrating LLMs with the capabilities of retrieval, planning, memory, evaluation, and reflection in agents can greatly enhance the potential of LLMs for 6G communications. To this end, we propose a multi-agent system with customized communication knowledge and tools for solving communication-related tasks using natural language, comprising three components: (1) Multi-agent Data Retrieval (MDR), which employs condensate and inference agents to refine and summarize communication knowledge from the knowledge base, expanding the knowledge boundaries of LLMs in 6G communications; (2) Multi-agent Collaborative Planning (MCP), which utilizes multiple planning agents to generate feasible solutions for the communication-related task from different perspectives based on the retrieved knowledge; (3) Multi-agent Evaluation and Reflexion (MER), which utilizes an evaluation agent to assess the solutions, and applies reflexion and refinement agents to provide improvement suggestions for the current solutions. Finally, we validate the effectiveness of the proposed multi-agent system by designing a semantic communication system as a case study of 6G communications.
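The abstract describes a three-stage pipeline (MDR, MCP, MER). Below is a minimal Python sketch of how these stages could compose, not the authors' implementation: `call_llm`, the prompt strings, and the `KnowledgeBase` class are all hypothetical placeholders, and the retrieval here is naive keyword matching rather than the embedding-based retrieval a real system would use.

```python
# Hypothetical sketch of the MDR -> MCP -> MER pipeline from the abstract.
# call_llm, the prompts, and KnowledgeBase are placeholders, not the paper's code.
from dataclasses import dataclass, field

def call_llm(system_prompt: str, user_prompt: str) -> str:
    """Placeholder for a chat-completion API call; swap in a real client."""
    return f"[{system_prompt[:30]}...] response to: {user_prompt[:40]}..."

@dataclass
class KnowledgeBase:
    documents: list = field(default_factory=list)

    def retrieve(self, query: str, k: int = 3) -> list:
        """Naive keyword retrieval; a real system would use embeddings."""
        hits = [d for d in self.documents if any(w in d for w in query.split())]
        return hits[:k]

def mdr(kb: KnowledgeBase, task: str) -> str:
    """Multi-agent Data Retrieval: a condensate agent compresses raw hits,
    then an inference agent summarizes them into task-relevant knowledge."""
    raw = "\n".join(kb.retrieve(task))
    condensed = call_llm("You condense communication documents.", raw)
    return call_llm("You infer task-relevant knowledge.", f"{task}\n{condensed}")

def mcp(task: str, knowledge: str, n_planners: int = 3) -> list:
    """Multi-agent Collaborative Planning: several planning agents propose
    solutions from different perspectives using the retrieved knowledge."""
    perspectives = ["efficiency", "robustness", "deployability"]
    return [
        call_llm(f"You plan from a {p} perspective.",
                 f"{task}\nKnowledge: {knowledge}")
        for p in perspectives[:n_planners]
    ]

def mer(task: str, solutions: list, max_rounds: int = 2) -> str:
    """Multi-agent Evaluation and Reflexion: evaluate, reflect, refine."""
    best = solutions[0]
    for _ in range(max_rounds):
        verdict = call_llm("You evaluate solutions.",
                           f"{task}\n" + "\n".join(solutions))
        reflection = call_llm("You reflect on weaknesses.", verdict)
        best = call_llm("You refine the solution.",
                        f"{best}\nSuggestions: {reflection}")
    return best

if __name__ == "__main__":
    kb = KnowledgeBase(documents=["semantic communication encodes meaning ..."])
    task = "Design a semantic communication system for image transmission."
    knowledge = mdr(kb, task)
    plans = mcp(task, knowledge)
    print(mer(task, plans))
```

The loop in `mer` mirrors the evaluate-reflect-refine cycle the abstract attributes to the evaluation, reflexion, and refinement agents; in a full system each role would likely be a separate LLM instance or prompt persona rather than one shared helper.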
Authors: Feibo Jiang, Li Dong, Yubo Peng, Kezhi Wang, Kun Yang, Cunhua Pan, Dusit Niyato, Octavia A. Dobre