Accelerating Split Federated Learning over Wireless Communication Networks (2310.15584v1)

Published 24 Oct 2023 in cs.LG, cs.NI, and eess.SP

Abstract: The development of AI creates opportunities for deep neural network (DNN)-based applications. However, the large number of parameters and the computational complexity of DNNs make them difficult to deploy on resource-constrained edge devices. An effective way to address this challenge is model partitioning/splitting, in which a DNN is divided into two parts deployed on the device and the server, respectively, for co-training or co-inference. In this paper, we consider a split federated learning (SFL) framework that combines the parallel model-training mechanism of federated learning (FL) with the model-splitting structure of split learning (SL). We consider a practical scenario of heterogeneous devices, each with an individual DNN split point. We formulate a joint split point selection and bandwidth allocation problem to minimize system latency. Using alternating optimization, we decompose the problem into two sub-problems and solve each optimally. Experimental results demonstrate the superiority of our approach in latency reduction and accuracy improvement.
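To make the setup concrete, here is a minimal sketch of the structure the abstract describes: each device computes the first layers of a shared DNN up to its own split point, uploads the cut-layer activations over a shared wireless uplink, and the server finishes the computation; an alternating loop then updates split points and bandwidth shares. All numbers (layer costs, activation sizes, device speeds, total bandwidth), the additive latency model, and the proportional bandwidth heuristic are illustrative assumptions, not the paper's formulation; the paper solves both sub-problems optimally.

```python
# Sketch of per-round latency in SFL with per-device split points and
# shared uplink bandwidth. All constants and the latency model are
# illustrative assumptions, not taken from the paper.

LAYERS = 6                                          # depth of the shared DNN
flops_per_layer = [1.0, 2.0, 4.0, 4.0, 2.0, 1.0]    # assumed per-layer compute cost
activation_size = [8.0, 6.0, 4.0, 2.0, 1.0, 0.5]    # assumed cut-layer size (Mbit)

devices = [{"compute": 1.0}, {"compute": 0.5}, {"compute": 2.0}]  # heterogeneous speeds
SERVER_COMPUTE = 10.0                               # server compute speed
TOTAL_BW = 10.0                                     # shared uplink bandwidth (Mbit/s)

def device_latency(dev, split, bw):
    """One device's latency: on-device layers + cut-layer upload + server layers."""
    t_dev = sum(flops_per_layer[:split]) / dev["compute"]
    t_up = activation_size[split - 1] / bw
    t_srv = sum(flops_per_layer[split:]) / SERVER_COMPUTE
    return t_dev + t_up + t_srv

def system_latency(splits, bws):
    """Devices train in parallel (the FL part), so the round ends with the slowest one."""
    return max(device_latency(d, s, b) for d, s, b in zip(devices, splits, bws))

# Alternating optimization: with bandwidth fixed, pick each device's best split
# point; with splits fixed, reallocate bandwidth (here a simple proportional
# heuristic, whereas the paper solves this sub-problem optimally).
bws = [TOTAL_BW / len(devices)] * len(devices)
splits = [1] * len(devices)
for _ in range(10):
    splits = [min(range(1, LAYERS), key=lambda s: device_latency(d, s, b))
              for d, b in zip(devices, bws)]
    loads = [activation_size[s - 1] for s in splits]   # uplink demand per device
    bws = [TOTAL_BW * l / sum(loads) for l in loads]

print("splits:", splits, "round latency:", round(system_latency(splits, bws), 3))
```

Because the round latency is the maximum over devices, shifting a slow device's split point earlier (less on-device compute) and granting it a larger uplink share can shorten the whole round, which is the coupling the joint problem exploits.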

Authors (5)
  1. Ce Xu (70 papers)
  2. Jinxuan Li (2 papers)
  3. Yuan Liu (342 papers)
  4. Yushi Ling (1 paper)
  5. Miaowen Wen (69 papers)
Citations (9)
