Split Learning in 6G Edge Networks (2306.12194v3)

Published 21 Jun 2023 in cs.LG, cs.DC, and cs.NI

Abstract: With the proliferation of distributed edge computing resources, the 6G mobile network will evolve into a network for connected intelligence. Along this line, the proposal to incorporate federated learning into the mobile edge has gained considerable interest in recent years. However, the deployment of federated learning faces substantial challenges, as massive numbers of resource-limited IoT devices can hardly support on-device model training. This has led to the emergence of split learning (SL), which enables servers to handle the major training workload while still enhancing data privacy. In this article, we offer a brief overview of key advancements in SL and articulate its seamless integration with wireless edge networks. We begin by illustrating the tailored 6G architecture to support edge SL. Then, we examine the critical design issues for edge SL, including innovative resource-efficient learning frameworks and resource management strategies under a single edge server. Additionally, we expand the scope to multi-edge scenarios, exploring multi-edge collaboration and mobility management from a networking perspective. Finally, we discuss open problems for edge SL, including convergence analysis, asynchronous SL, and U-shaped SL.
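To make the split-learning workflow described in the abstract concrete, below is a minimal, hypothetical sketch of one SL training step in PyTorch: the client runs only the shallow front of the model and sends the cut-layer activations ("smashed data") to the server, which carries the bulk of the forward and backward computation and returns the cut-layer gradient. The layer sizes, cut point, optimizers, and data here are illustrative placeholders, not configurations from the paper.

```python
# Minimal split-learning sketch (illustrative only; not the paper's code).
# The cut layer, model shapes, and data are hypothetical placeholders.
import torch
import torch.nn as nn

# Client holds the shallow front of the network; the edge server holds the rest.
client_net = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU())
server_net = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

client_opt = torch.optim.SGD(client_net.parameters(), lr=0.01)
server_opt = torch.optim.SGD(server_net.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

def train_step(x, y):
    # 1) Client forward pass up to the cut layer; only the "smashed data"
    #    (activations) leave the device, never the raw inputs.
    smashed = client_net(x)
    smashed_for_server = smashed.detach().requires_grad_(True)

    # 2) Server completes the forward pass and performs the bulk of backprop.
    logits = server_net(smashed_for_server)
    loss = loss_fn(logits, y)
    server_opt.zero_grad()
    loss.backward()
    server_opt.step()

    # 3) Server returns the gradient at the cut layer; the client finishes
    #    backprop through its shallow layers.
    client_opt.zero_grad()
    smashed.backward(smashed_for_server.grad)
    client_opt.step()
    return loss.item()

# Hypothetical usage with random stand-in data.
x = torch.randn(32, 1, 28, 28)
y = torch.randint(0, 10, (32,))
print(train_step(x, y))
```

In a real edge deployment, steps 1 and 3 would cross the wireless link, which is why the works surveyed here focus on the communication cost of the smashed data and cut-layer gradients, and on where to place the cut.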
