ESFL: Efficient Split Federated Learning over Resource-Constrained Heterogeneous Wireless Devices (2402.15903v2)

Published 24 Feb 2024 in cs.LG, cs.AI, and cs.NI

Abstract: Federated learning (FL) allows multiple parties (distributed devices) to train a machine learning model without sharing raw data. How to effectively and efficiently utilize the resources on devices and the central server is a highly interesting yet challenging problem. In this paper, we propose an efficient split federated learning algorithm (ESFL) to take full advantage of the powerful computing capabilities at a central server under a split federated learning framework with heterogeneous end devices (EDs). By splitting the model into different submodels between the server and EDs, our approach jointly optimizes user-side workload and server-side computing resource allocation by considering users' heterogeneity. We formulate the whole optimization problem as a mixed-integer non-linear program, which is an NP-hard problem, and develop an iterative approach to obtain an approximate solution efficiently. Extensive simulations have been conducted to validate the significantly increased efficiency of our ESFL approach compared with standard federated learning, split learning, and splitfed learning.
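To make the split-federated workflow described in the abstract concrete, below is a minimal, self-contained PyTorch sketch of one training round: each client runs the front portion of the model, the server completes the forward and backward pass on the cut-layer ("smashed") activations, and the client-side submodels are averaged FedAvg-style at the end of the round. The split point, layer sizes, synthetic data, and sequential client processing are illustrative assumptions only; they are not the paper's actual ESFL configuration, nor do they include its joint workload and server-resource optimization.

```python
# Minimal split federated learning sketch (illustrative assumptions, not the
# paper's ESFL algorithm): a simple MLP split into a client-side and a
# server-side submodel, trained on synthetic data.
import torch
import torch.nn as nn

torch.manual_seed(0)
NUM_CLIENTS = 3  # assumed client count for the sketch

def make_client():  # client-side submodel: layers before the cut
    return nn.Sequential(nn.Linear(20, 32), nn.ReLU())

def make_server():  # server-side submodel: layers after the cut
    return nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 2))

clients = [make_client() for _ in range(NUM_CLIENTS)]
server = make_server()
loss_fn = nn.CrossEntropyLoss()

for rnd in range(5):  # one federated round per outer iteration
    for cm in clients:
        opt_c = torch.optim.SGD(cm.parameters(), lr=0.1)
        opt_s = torch.optim.SGD(server.parameters(), lr=0.1)
        x = torch.randn(8, 20)              # synthetic local batch
        y = torch.randint(0, 2, (8,))
        smashed = cm(x)                     # client forward up to the cut layer
        out = server(smashed)               # server completes the forward pass
        loss = loss_fn(out, y)
        opt_c.zero_grad()
        opt_s.zero_grad()
        loss.backward()                     # gradients flow back through the cut
        opt_s.step()
        opt_c.step()
    # FedAvg-style aggregation of the client-side submodels
    with torch.no_grad():
        avg = {k: sum(c.state_dict()[k] for c in clients) / NUM_CLIENTS
               for k in clients[0].state_dict()}
        for c in clients:
            c.load_state_dict(avg)
```

In the paper's setting, each client would transmit its cut-layer activations over a wireless link and process in parallel, and ESFL additionally decides how the model is split per device and how much server compute each heterogeneous device receives; none of that resource-allocation logic is modeled in this sketch.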
