
When Large Language Model Agents Meet 6G Networks: Perception, Grounding, and Alignment (2401.07764v2)

Published 15 Jan 2024 in cs.AI and cs.NI

Abstract: AI agents based on multimodal LLMs are expected to revolutionize human-computer interaction and offer more personalized assistant services across domains such as healthcare, education, manufacturing, and entertainment. Deploying LLM agents in 6G networks democratizes access to previously expensive AI assistant services on mobile devices, reducing interaction latency and better preserving user privacy. Nevertheless, the limited capacity of mobile devices constrains the effectiveness of deploying and executing local LLMs, which necessitates offloading complex tasks to global LLMs running on edge servers during long-horizon interactions. In this article, we propose a split learning system for LLM agents in 6G networks that leverages collaboration between mobile devices and edge servers, where multiple LLMs with different roles are distributed across both to perform user-agent interactive tasks collaboratively. In the proposed system, LLM agents are split into perception, grounding, and alignment modules, with inter-module communications that meet extended user requirements for 6G network functions, including integrated sensing and communication, digital twins, and task-oriented communications. Furthermore, we introduce a novel model caching algorithm for LLMs within the proposed system to improve in-context model utilization, thereby reducing the network costs of the collaborative mobile and edge LLM agents.
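
To make the device-edge split concrete, below is a minimal Python sketch of the division of labor the abstract describes: a small on-device agent handles short, text-only turns, while long-horizon or multimodal tasks are offloaded to a global LLM on the edge server, whose models sit behind a cache. All class names, the offloading heuristic, and the LRU eviction policy are illustrative assumptions for exposition; the paper's actual caching algorithm is only summarized by the abstract and is not reproduced here.

```python
# Illustrative sketch only: module names, the offloading heuristic, and the
# LRU cache policy are assumptions, not the paper's implementation.
from collections import OrderedDict
from dataclasses import dataclass


@dataclass
class Request:
    text: str
    multimodal: bool   # carries image/sensor input (perception-heavy)
    horizon: int       # expected number of interaction turns


class LocalAgent:
    """Perception module on the mobile device (small local LLM)."""

    def can_handle(self, req: Request) -> bool:
        # Heuristic stand-in: keep short, text-only turns on-device;
        # offload long-horizon or multimodal tasks to the edge server.
        return not req.multimodal and req.horizon <= 3

    def run(self, req: Request) -> str:
        return f"[local] {req.text}"


class EdgeAgent:
    """Grounding/alignment modules on the edge server (global LLM),
    fronted by a simple model cache."""

    def __init__(self, capacity: int = 2):
        self.capacity = capacity
        self.cache: "OrderedDict[str, str]" = OrderedDict()  # model_id -> weights

    def get_model(self, model_id: str) -> str:
        # Least-recently-used eviction as a placeholder for the paper's
        # in-context model caching algorithm.
        if model_id in self.cache:
            self.cache.move_to_end(model_id)
        else:
            if len(self.cache) >= self.capacity:
                self.cache.popitem(last=False)  # evict the LRU model
            self.cache[model_id] = f"weights:{model_id}"  # placeholder load
        return self.cache[model_id]

    def run(self, req: Request, model_id: str = "global-llm") -> str:
        self.get_model(model_id)
        return f"[edge/{model_id}] {req.text}"


def serve(req: Request, local: LocalAgent, edge: EdgeAgent) -> str:
    """Split execution: answer on-device when possible, offload otherwise."""
    return local.run(req) if local.can_handle(req) else edge.run(req)


if __name__ == "__main__":
    local, edge = LocalAgent(), EdgeAgent()
    print(serve(Request("set a timer", multimodal=False, horizon=1), local, edge))
    print(serve(Request("plan my rehab schedule", multimodal=True, horizon=20), local, edge))
```

In this sketch the offloading decision is a fixed threshold; the system proposed in the paper instead coordinates perception, grounding, and alignment modules across both tiers, with the caching policy chosen to raise model utilization and cut network cost.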

Authors (8)
  1. Minrui Xu (57 papers)
  2. Jiawen Kang (204 papers)
  3. Zehui Xiong (177 papers)
  4. Shiwen Mao (96 papers)
  5. Zhu Han (431 papers)
  6. Dong In Kim (168 papers)
  7. Khaled B. Letaief (209 papers)
  8. Dusit Niyato (671 papers)
Citations (17)