Optimization of Federated Learning's Client Selection for Non-IID Data Based on Grey Relational Analysis (2310.08147v3)

Published 12 Oct 2023 in cs.DC

Abstract: Federated learning (FL) is a distributed learning framework designed for applications with privacy-sensitive data. Without sharing raw data, FL trains local models on individual devices and constructs the global model on the server through model aggregation. However, to reduce communication cost, the participants in each training round are typically selected at random, which significantly decreases training efficiency under data and device heterogeneity. To address this issue, we introduce a novel approach that considers the data distribution and computational resources of devices when selecting clients for each training round. Our method performs client selection based on Grey Relational Analysis (GRA) by jointly considering each client's available computational resources, training loss, and weight divergence. To examine its usability, we implement our contribution on Amazon Web Services (AWS) using the TensorFlow Python library. We evaluate the algorithm's performance in different setups by varying the learning rate, network size, number of selected clients, and client selection round. The evaluation results show that, compared with the state-of-the-art methods federated averaging (FedAvg) and Pow-d, our proposed algorithm significantly improves test accuracy and reduces the clients' average waiting time.
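
As a reading aid, here is a minimal, hypothetical sketch of what a GRA-based client-ranking step like the one described above could look like in Python with NumPy. This is not the paper's implementation: the function name, the direction assigned to each criterion (whether higher compute, lower loss, or lower weight divergence counts as better), the equal weights, and the distinguishing coefficient rho = 0.5 are all illustrative assumptions.

```python
import numpy as np

def gra_client_selection(metrics, benefit_mask, weights, k, rho=0.5):
    """Rank clients with Grey Relational Analysis and return the top-k indices.

    metrics      : (n_clients, n_criteria) raw scores per client, e.g. columns
                   for available compute, training loss, and weight divergence
    benefit_mask : per-criterion flag, True if larger values are better
    weights      : per-criterion importance weights, summing to 1
    """
    m = np.asarray(metrics, dtype=float)
    mask = np.asarray(benefit_mask, dtype=bool)
    lo, hi = m.min(axis=0), m.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)  # guard against constant columns

    # Min-max normalisation so that 1.0 is always the best value per criterion.
    norm = np.where(mask, (m - lo) / span, (hi - m) / span)

    # Absolute deviation from the ideal reference sequence (all ones).
    delta = np.abs(1.0 - norm)

    # Grey relational coefficients; rho is the distinguishing coefficient.
    xi = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())

    # Grey relational grade: weighted average of the coefficients per client.
    grade = xi @ np.asarray(weights, dtype=float)
    return np.argsort(grade)[::-1][:k]

# Toy round with 5 clients; columns: [compute, training loss, weight divergence].
# The direction of each criterion is a modelling choice, not taken from the paper.
metrics = [[8, 0.9, 0.30],
           [2, 1.4, 0.10],
           [5, 0.7, 0.25],
           [9, 1.1, 0.05],
           [4, 0.8, 0.20]]
chosen = gra_client_selection(metrics, benefit_mask=[True, False, False],
                              weights=[1/3, 1/3, 1/3], k=2)
print(chosen)  # indices of the two clients closest to the ideal reference
```

Clients whose metric profiles are grey-relationally closest to the ideal reference sequence are selected first, which is what lets GRA fold heterogeneous criteria into a single ranking.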

References (36)
  1. S. Deng, H. Zhao, W. Fang, J. Yin, S. Dustdar, and A. Y. Zomaya, “Edge intelligence: The confluence of edge computing and artificial intelligence,” IEEE Internet of Things Journal, vol. 7, no. 8, pp. 7457–7469, 2020.
  2. B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas, “Communication-Efficient Learning of Deep Networks from Decentralized Data,” in Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, vol. 54. PMLR, 2017, pp. 1273–1282.
  3. S. Abdulrahman, H. Tout, H. Ould-Slimane, A. Mourad, C. Talhi, and M. Guizani, “A survey on federated learning: The journey from centralized to distributed on-site learning and beyond,” IEEE Internet of Things Journal, vol. 8, no. 7, pp. 5476–5497, 2021.
  4. Y. Deng, F. Lyu, J. Ren, H. Wu, Y. Zhou, Y. Zhang, and X. Shen, “AUCTION: Automated and quality-aware client selection framework for efficient federated learning,” IEEE Transactions on Parallel and Distributed Systems, vol. 33, no. 8, pp. 1996–2009, 2021.
  5. T. Nishio and R. Yonetani, “Client selection for federated learning with heterogeneous resources in mobile edge,” in ICC 2019 - 2019 IEEE International Conference on Communications (ICC). IEEE, 2019, pp. 1–7.
  6. Y. J. Cho, J. Wang, and G. Joshi, “Client selection in federated learning: Convergence analysis and power-of-choice selection strategies,” in Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS), 2022.
  7. Y. Liu, Y. Dong, H. Wang, H. Jiang, and Q. Xu, “Distributed fog computing and federated-learning-enabled secure aggregation for IoT devices,” IEEE Internet of Things Journal, vol. 9, no. 21, pp. 21025–21037, 2022.
  8. W. Y. B. Lim, Z. Xiong, C. Miao, D. Niyato, Q. Yang, C. Leung, and H. V. Poor, “Hierarchical incentive mechanism design for federated machine learning in mobile networks,” IEEE Internet of Things Journal, vol. 7, no. 10, pp. 9575–9588, 2020.
  9. R. Zeng, S. Zhang, J. Wang, and X. Chu, “FMore: An incentive scheme of multi-dimensional auction for federated learning in MEC,” in 2020 IEEE 40th International Conference on Distributed Computing Systems (ICDCS). IEEE, 2020, pp. 278–288.
  10. C. Li, X. Zeng, M. Zhang, and Z. Cao, “PyramidFL: A fine-grained client selection framework for efficient federated learning,” in Proceedings of the 28th Annual International Conference on Mobile Computing and Networking, 2022, pp. 158–171.
  11. F. Lai, X. Zhu, H. V. Madhyastha, and M. Chowdhury, “Oort: Efficient federated learning via guided participant selection,” in OSDI, 2021, pp. 19–35.
  12. T. Huang, W. Lin, W. Wu, L. He, K. Li, and A. Y. Zomaya, “An efficiency-boosting client selection scheme for federated learning with fairness guarantee,” IEEE Transactions on Parallel and Distributed Systems, vol. 32, no. 7, pp. 1552–1564, 2020.
  13. Z. Hu, K. Shaloudegi, G. Zhang, and Y. Yu, “Federated learning meets multi-objective optimization,” IEEE Transactions on Network Science and Engineering, vol. 9, no. 4, pp. 2039–2051, 2022.
  14. Q. Li, Y. Diao, Q. Chen, and B. He, “Federated learning on non-IID data silos: An experimental study,” in 2022 IEEE 38th International Conference on Data Engineering (ICDE), 2022, pp. 965–978.
  15. X. Li, K. Huang, W. Yang, S. Wang, and Z. Zhang, “On the convergence of FedAvg on non-IID data,” in International Conference on Learning Representations, 2020.
  16. T. Li, A. K. Sahu, M. Zaheer, M. Sanjabi, A. Talwalkar, and V. Smith, “Federated optimization in heterogeneous networks,” in Proceedings of Machine Learning and Systems, I. Dhillon, D. Papailiopoulos, and V. Sze, Eds., vol. 2, 2020, pp. 429–450.
  17. C. T. Dinh, N. H. Tran, and T. D. Nguyen, “Personalized federated learning with Moreau envelopes,” in Proceedings of the 34th International Conference on Neural Information Processing Systems, ser. NIPS’20. Red Hook, NY, USA: Curran Associates Inc., 2020.
  18. V. Smith, C.-K. Chiang, M. Sanjabi, and A. Talwalkar, “Federated multi-task learning,” in Proceedings of the 31st International Conference on Neural Information Processing Systems, ser. NIPS’17. Red Hook, NY, USA: Curran Associates Inc., 2017, pp. 4427–4437.
  19. A. Fallah, A. Mokhtari, and A. Ozdaglar, “Personalized federated learning with theoretical guarantees: A model-agnostic meta-learning approach,” in Proceedings of the 34th International Conference on Neural Information Processing Systems, ser. NIPS’20. Red Hook, NY, USA: Curran Associates Inc., 2020.
  20. T. B. Johnson and C. Guestrin, “Training deep models faster with robust, approximate importance sampling,” Advances in Neural Information Processing Systems, vol. 31, 2018.
  21. Z. Zhao and G. Joshi, “A dynamic reweighting strategy for fair federated learning,” in ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2022, pp. 8772–8776.
  22. R. Saha, S. Misra, A. Chakraborty, C. Chatterjee, and P. K. Deb, “Data-centric client selection for federated learning over distributed edge networks,” IEEE Transactions on Parallel and Distributed Systems, vol. 34, no. 2, pp. 675–686, 2022.
  23. T. Li, A. K. Sahu, M. Zaheer, M. Sanjabi, A. Talwalkar, and V. Smith, “Federated optimization in heterogeneous networks,” Proceedings of Machine Learning and Systems, vol. 2, pp. 429–450, 2020.
  24. C. Zhou, H. Tian, H. Zhang, J. Zhang, M. Dong, and J. Jia, “TEA-Fed: Time-efficient asynchronous federated learning for edge computing,” in Proceedings of the 18th ACM International Conference on Computing Frontiers, 2021, pp. 30–37.
  25. C. T. Dinh, N. H. Tran, and T. D. Nguyen, “Personalized federated learning with Moreau envelopes,” Advances in Neural Information Processing Systems, vol. 33, pp. 21394–21405, 2020.
  26. A. Z. Tan, H. Yu, L. Cui, and Q. Yang, “Towards personalized federated learning,” IEEE Transactions on Neural Networks and Learning Systems, 2022.
  27. L. You, S. Liu, Y. Chang, and C. Yuen, “A triple-step asynchronous federated learning mechanism for client activation, interaction optimization, and aggregation enhancement,” IEEE Internet of Things Journal, vol. 9, no. 23, pp. 24199–24211, 2022.
  28. C. Xie, S. Koyejo, and I. Gupta, “Asynchronous federated optimization,” in Proceedings of the 12th Annual Workshop on Optimization and Machine Learning (OPT), 2020, pp. 1–9.
  29. A. Rodio, F. Faticanti, O. Marfoq, G. Neglia, and E. Leonardi, “Federated learning under heterogeneous and correlated client availability,” in INFOCOM. IEEE, 2023.
  30. M. Mohri, G. Sivek, and A. T. Suresh, “Agnostic federated learning,” in ICML. PMLR, 2019, pp. 4615–4625.
  31. L. Lyu, X. Xu, Q. Wang, and H. Yu, “Collaborative fairness in federated learning,” Federated Learning: Privacy and Incentive, pp. 189–204, 2020.
  32. Y. Yang and Z. Xu, “Rethinking the value of labels for improving class-imbalanced learning,” Advances in Neural Information Processing Systems, vol. 33, pp. 19290–19301, 2020.
  33. Y. Zhang, B. Kang, B. Hooi, S. Yan, and J. Feng, “Deep long-tailed learning: A survey,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023.
  34. M. Buda, A. Maki, and M. A. Mazurowski, “A systematic study of the class imbalance problem in convolutional neural networks,” Neural Networks, vol. 106, pp. 249–259, 2018.
  35. V. Feldman and C. Zhang, “What neural networks memorize and why: Discovering the long tail via influence estimation,” Advances in Neural Information Processing Systems, vol. 33, pp. 2881–2891, 2020.
  36. M. Mitzenmacher, “The power of two choices in randomized load balancing,” IEEE Transactions on Parallel and Distributed Systems, vol. 12, no. 10, pp. 1094–1104, 2001.
Authors (8)
  1. Shuaijun Chen (13 papers)
  2. Omid Tavallaie (10 papers)
  3. Michael Henri Hambali (1 paper)
  4. Seid Miad Zandavi (10 papers)
  5. Hamed Haddadi (131 papers)
  6. Nicholas Lane (14 papers)
  7. Song Guo (138 papers)
  8. Albert Y. Zomaya (50 papers)
Citations (1)
