
Adaptive and Parallel Split Federated Learning in Vehicular Edge Computing (2405.18707v1)

Published 29 May 2024 in cs.LG, cs.AI, and cs.NI

Abstract: Vehicular edge intelligence (VEI) is a promising paradigm for enabling future intelligent transportation systems by accommodating AI at the vehicular edge computing (VEC) system. Federated learning (FL) is a fundamental technology enabling collaborative local model training and aggregation while safeguarding the privacy of vehicle data in VEI. However, traditional FL struggles to adapt to vehicle heterogeneity and to train large models on resource-constrained vehicles, and it remains susceptible to model weight privacy leakage. Meanwhile, split learning (SL) has been proposed as a promising collaborative learning framework that mitigates the risk of model weight leakage and reduces the training workload on vehicles. SL sequentially trains a model between a vehicle and an edge cloud (EC) by dividing the entire model into a vehicle-side model and an EC-side model at a given cut layer. In this work, we combine the advantages of SL and FL to develop an Adaptive Split Federated Learning scheme for Vehicular Edge Computing (ASFV). The ASFV scheme adaptively splits the model and parallelizes the training process, taking into account mobile vehicle selection and resource allocation. Our extensive simulations, conducted on non-independent and identically distributed data, demonstrate that the proposed ASFV solution significantly reduces training latency compared to existing benchmarks, while adapting to network dynamics and vehicles' mobility.
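The cut-layer split that the abstract describes can be sketched in a few lines. This is a toy illustration only, with made-up layers; it is not the paper's actual model, nor the ASFV selection and resource-allocation scheme:

```python
# Toy sketch of split learning's forward pass: a model is divided at a
# chosen cut layer into a vehicle-side part and an edge-cloud (EC)-side
# part. The "layers" here are simple callables, purely illustrative.

def make_model():
    # A stand-in "model": a list of per-layer functions on a scalar.
    return [
        lambda x: x * 2.0,   # layer 0
        lambda x: x + 1.0,   # layer 1
        lambda x: x ** 2,    # layer 2
        lambda x: x - 3.0,   # layer 3
    ]

def split_at(model, cut_layer):
    """Divide the model into a vehicle-side and an EC-side sub-model."""
    return model[:cut_layer], model[cut_layer:]

def forward(layers, x):
    for layer in layers:
        x = layer(x)
    return x

model = make_model()
vehicle_side, ec_side = split_at(model, cut_layer=2)

# The vehicle runs its part locally and sends only the cut-layer
# activation to the EC, rather than raw data or full model weights.
activation = forward(vehicle_side, 1.5)   # vehicle-side computation
output = forward(ec_side, activation)     # EC-side computation

# The split computation matches the unsplit model end to end.
assert output == forward(model, 1.5)
```

Choosing `cut_layer` trades off vehicle compute against communication, which is why the paper makes the split adaptive per vehicle.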

Authors (5)
  1. Xianke Qiang
  2. Zheng Chang
  3. Yun Hu
  4. Lei Liu
  5. Timo Hämäläinen
