Evaluating Multi-Global Server Architecture for Federated Learning
Abstract: Federated learning (FL) with a single global server is currently a popular approach for training machine learning models in decentralized environments such as mobile and edge devices. However, the centralized server architecture creates a single point of failure: any fault at the central/global server can bring down the entire system. To minimize this risk, we propose a novel federated learning framework that deploys multiple global servers. We posit that multiple global servers can enhance efficiency by capitalizing on local collaborations and aggregated knowledge, while also addressing the lack of tolerance to communication failures inherent in the single-server framework. We conducted a series of experiments using a dataset containing the event history of electric vehicle (EV) charging at numerous stations. We deployed a federated learning setup with multiple global servers and clients, where each client represented a different region and each global server was responsible for aggregating the local updates from its clients. Our preliminary results show that the performance difference attributable to multiple global servers is less than 1%. Although the hypothesized efficiency gains did not materialize, the communication-failure handling rule added to the algorithm can resolve the fault-tolerance issue. Future research can focus on identifying specific use cases for the deployment of multiple global servers.
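The abstract describes the setup only at a high level. The following is a minimal sketch of how one round of such a multi-global-server scheme could look, assuming a FedAvg-style weighted average per server, synthetic per-region clients, and a simple "skip clients that fail to report" rule as one plausible reading of the paper's communication-failure handling. All names, counts, and the cross-server consensus step are illustrative assumptions, not the authors' implementation.

```python
import random
import numpy as np

def fedavg(updates, weights):
    """FedAvg-style weighted average of client model vectors."""
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()
    return sum(w * u for w, u in zip(weights, updates))

def train_round(global_model, clients, failure_rate=0.2, rng=random):
    """One round on one global server: broadcast, collect, aggregate.

    Clients that fail to report (simulated communication failure) are
    simply skipped -- a plausible reading of the paper's error-handling
    rule, not the authors' exact mechanism.
    """
    updates, sizes = [], []
    for client in clients:
        if rng.random() < failure_rate:   # simulated dropped connection
            continue
        updates.append(client["data_mean"])  # stand-in for a local SGD update
        sizes.append(client["n_samples"])
    if not updates:            # every client failed: keep the old model
        return global_model
    return fedavg(updates, sizes)

# Hypothetical setup: 3 regional global servers, 4 clients each.
rng = random.Random(0)
dim = 8
servers = []
for region in range(3):
    clients = [{"data_mean": np.full(dim, region + rng.random()),
                "n_samples": rng.randint(50, 500)} for _ in range(4)]
    servers.append({"model": np.zeros(dim), "clients": clients})

for rnd in range(5):
    for s in servers:
        s["model"] = train_round(s["model"], s["clients"], rng=rng)
    # Optional cross-server sync: plain average of the regional models.
    consensus = np.mean([s["model"] for s in servers], axis=0)
    for s in servers:
        s["model"] = consensus

print("Consensus model after 5 rounds:", np.round(consensus, 3))
```

Because each regional server aggregates independently before the (assumed) consensus step, a communication failure between one server and its clients only delays that region's contribution rather than halting the whole system, which is the fault-tolerance property the abstract argues for.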