
SplitLLM: Hierarchical Split Learning for Large Language Model over Wireless Network (2501.13318v1)

Published 23 Jan 2025 in cs.DC

Abstract: Fine-tuning an LLM using the local data of edge users can enable personalized services and applications. For privacy protection, the prevalent solution adopts distributed learning for fine-tuning and integrates low-rank adaptation (LoRA) to reduce users' computational load. However, as the number of users increases, numerous users simultaneously communicate with the server, and multiple server-side models concurrently execute on the server, leading to significant communication congestion and memory pressure. In this paper, we propose a split learning (SL) scheme for fine-tuning LLMs in wireless networks, which involves one cloud server, a small number of edge servers, and multiple users. Specifically, the pre-trained model and LoRA adapters are divided into three parts and deployed across the cloud, edge, and user sides. The training process follows the sequence of user, edge, and cloud, with forward and backward propagation achieved by transmitting activations and gradients. In each round, all edge servers and an equivalent number of users train in parallel, and only the LoRA adapters are updated. At the end of each round, all edge-side and user-side LoRA adapters are uploaded to the cloud for aggregation. Extensive simulation demonstrates that the proposed scheme can reduce peak memory usage by up to 74% compared to state-of-the-art benchmarks.
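The abstract describes a three-way split of the model, with forward propagation following the user → edge → cloud sequence, backward propagation returning gradients in reverse, and only the LoRA adapters being trained. The sketch below illustrates that round structure in a single process. All layer sizes, module names, and the LoRA construction are illustrative assumptions, not the authors' implementation; in the deployed system the three parts run on separate machines and the intermediate activations and gradients are transmitted over the wireless link.

```python
# Minimal single-process sketch of one split-learning training step
# (user -> edge -> cloud forward, cloud -> edge -> user backward).
# Layer sizes and module names are assumptions for illustration only.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank (LoRA) adapter."""

    def __init__(self, dim, rank=4):
        super().__init__()
        self.base = nn.Linear(dim, dim)
        self.base.weight.requires_grad_(False)   # pre-trained weights stay frozen
        self.base.bias.requires_grad_(False)
        self.lora_a = nn.Linear(dim, rank, bias=False)
        self.lora_b = nn.Linear(rank, dim, bias=False)
        nn.init.zeros_(self.lora_b.weight)        # adapter starts as a no-op

    def forward(self, x):
        return self.base(x) + self.lora_b(self.lora_a(x))


dim = 64
user_part = nn.Sequential(LoRALinear(dim), nn.ReLU())    # first layers, on the user device
edge_part = nn.Sequential(LoRALinear(dim), nn.ReLU())    # middle layers, on an edge server
cloud_head = nn.Linear(dim, 10)
cloud_head.requires_grad_(False)                          # head treated as frozen pre-trained weights
cloud_part = nn.Sequential(LoRALinear(dim), cloud_head)   # last layers, on the cloud server

# Only parameters with requires_grad=True (the LoRA adapters) are optimized.
trainable = [p for m in (user_part, edge_part, cloud_part)
             for p in m.parameters() if p.requires_grad]
opt = torch.optim.AdamW(trainable, lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x, y = torch.randn(8, dim), torch.randint(0, 10, (8,))

# Forward pass follows the user -> edge -> cloud sequence; over a wireless link
# the activations (and later the gradients) would be transmitted between tiers.
a_user = user_part(x)
a_edge = edge_part(a_user)       # edge receives the user-side activation
logits = cloud_part(a_edge)      # cloud receives the edge-side activation

loss = loss_fn(logits, y)
opt.zero_grad()
loss.backward()                  # gradients flow cloud -> edge -> user
opt.step()                       # only the LoRA adapters are updated

# At the end of a round, the user-side and edge-side adapter state_dicts would be
# uploaded to the cloud and aggregated (e.g. averaged) across participants.
```

This sketch uses a single optimizer for simplicity; in the scheme described by the paper, each tier holds and updates only its own LoRA adapters, and the cloud performs the end-of-round aggregation.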
