Memory-Efficient Split Federated Learning for LLM Fine-Tuning on Heterogeneous Mobile Devices (2506.02940v1)
Abstract: In this paper, we propose an edge-assisted split federated learning framework to facilitate LLM fine-tuning on heterogeneous mobile devices while alleviating memory pressure on both the mobile devices and the edge server. Specifically, mobile devices perform low-rank adaptation (LoRA) fine-tuning on only a subset of the lower layers of the pre-trained LLM, tailored to their individual capacities. The server maintains a full LLM and selectively fine-tunes the corresponding LoRA modules in a sequential manner for each device. To further enhance training efficiency, we propose a server-side training scheduling method that optimizes the processing order of devices to accelerate fine-tuning. Extensive experiments demonstrate that, compared to the baselines, our scheme reduces the memory footprint by 79% and training time by 6% while achieving comparable performance.
- Xiaopei Chen
- Liang Li
- Fei Ji
- Wen Wu
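The device-side scheme described in the abstract combines two ideas: LoRA (a frozen pre-trained weight plus a trainable low-rank update) and a per-device split point that limits fine-tuning to the lower layers a device can afford. A minimal sketch of both, using NumPy and hypothetical names (`LoRALinear`, `device_layer_split`) not taken from the paper:

```python
import numpy as np

class LoRALinear:
    """Frozen dense weight W plus a trainable low-rank update:
    y = W x + (alpha / r) * B @ A @ x  (standard LoRA parameterization)."""

    def __init__(self, in_dim, out_dim, rank=4, alpha=8, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((out_dim, in_dim))      # frozen pre-trained weight
        self.A = rng.standard_normal((rank, in_dim)) * 0.01  # trainable down-projection
        self.B = np.zeros((out_dim, rank))                   # trainable up-projection, zero-init
        self.scale = alpha / rank                            # so the update starts as a no-op

    def forward(self, x):
        return self.W @ x + self.scale * (self.B @ (self.A @ x))


def device_layer_split(total_layers, capacity_fraction):
    """Hypothetical capacity rule: a device fine-tunes only its lowest
    floor(total_layers * capacity_fraction) layers (at least one)."""
    return max(1, int(total_layers * capacity_fraction))


# Example: a 24-layer LLM and a device with 25% of the reference capacity
# fine-tunes LoRA modules on its 6 lowest layers; the rest stay server-side.
k = device_layer_split(24, 0.25)
layer = LoRALinear(in_dim=8, out_dim=8)
y = layer.forward(np.ones(8))  # equals W @ x initially, since B is zero-initialized
```

Because `B` is zero-initialized, fine-tuning starts from the pre-trained model's behavior exactly; only `A` and `B` (a small fraction of the parameters) are updated on-device, which is what keeps the mobile-side memory footprint low.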