Mitigating System Bias in Resource Constrained Asynchronous Federated Learning Systems (2401.13366v2)
Abstract: Federated learning (FL) systems face performance challenges when dealing with heterogeneous devices and non-identically distributed data across clients. We propose a dynamic global model aggregation method for Asynchronous Federated Learning (AFL) deployments to address these issues. Our aggregation method scores each client's model updates and adjusts their weighting based on upload frequency, accommodating differences in device capabilities. Additionally, we immediately provide an updated global model to clients after they upload their local models, reducing idle time and improving training efficiency. We evaluate our approach in an AFL deployment consisting of 10 simulated clients with heterogeneous compute constraints and non-IID data. The simulation results, using the FashionMNIST dataset, demonstrate over 10% and 19% improvements in global model accuracy compared to the state-of-the-art methods PAPAYA and FedAsync, respectively. Our dynamic aggregation method enables reliable global model training despite limited client resources and statistical data heterogeneity, improving robustness and scalability for real-world FL deployments.
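The abstract describes two mechanisms: down-weighting updates from clients that upload frequently, and returning the refreshed global model to a client immediately after its upload. The exact scoring function is not given in the abstract, so the sketch below uses a hypothetical weight of `alpha / (1 + upload_count)` purely for illustration; the class name `AsyncServer` and all parameters are assumptions, not the paper's implementation.

```python
import numpy as np


class AsyncServer:
    """Toy asynchronous FL server. Each incoming update is mixed into the
    global model with a weight that shrinks as that client's upload count
    grows, so fast devices do not dominate the aggregate. The scoring rule
    alpha / (1 + uploads) is a placeholder, not the paper's formula."""

    def __init__(self, dim: int, alpha: float = 0.5):
        self.global_model = np.zeros(dim)
        self.alpha = alpha
        self.upload_counts: dict[str, int] = {}  # client_id -> uploads so far

    def receive_update(self, client_id: str, local_model: np.ndarray) -> np.ndarray:
        n = self.upload_counts.get(client_id, 0)
        weight = self.alpha / (1 + n)  # down-weight frequent uploaders
        self.global_model = (1 - weight) * self.global_model + weight * local_model
        self.upload_counts[client_id] = n + 1
        # Hand the fresh global model straight back, so the client resumes
        # local training without waiting for a synchronization round.
        return self.global_model.copy()
```

For example, a client uploading an all-ones model twice in a row would see its second update applied with half the weight of its first, while a slower client's first upload still arrives at full initial weight.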
- B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas, “Communication-efficient learning of deep networks from decentralized data,” in Artificial Intelligence and Statistics. PMLR, 2017, pp. 1273–1282.
- F. Naeem, M. Tariq, and H. V. Poor, “SDN-enabled energy-efficient routing optimization framework for industrial Internet of Things,” IEEE Transactions on Industrial Informatics, vol. 17, no. 8, pp. 5660–5667, 2020.
- L. Ferraguig, Y. Djebrouni, S. Bouchenak, and V. Marangozova, “Survey of bias mitigation in federated learning,” in Conférence francophone d’informatique en Parallélisme, Architecture et Système, 2021.
- J. Konečnỳ, H. B. McMahan, F. X. Yu, P. Richtárik, A. T. Suresh, and D. Bacon, “Federated learning: Strategies for improving communication efficiency,” arXiv preprint arXiv:1610.05492, 2016.
- Y. Li, S. Yang, X. Ren, and C. Zhao, “Asynchronous federated learning with differential privacy for edge intelligence,” arXiv preprint arXiv:1912.07902, 2019.
- E. Diao, J. Ding, and V. Tarokh, “HeteroFL: Computation and communication efficient federated learning for heterogeneous clients,” arXiv preprint arXiv:2010.01264, 2020.
- C. Xu, Y. Qu, Y. Xiang, and L. Gao, “Asynchronous federated learning on heterogeneous devices: A survey,” arXiv preprint arXiv:2109.04269, 2021.
- T. Li, A. K. Sahu, M. Zaheer, M. Sanjabi, A. Talwalkar, and V. Smith, “Federated optimization in heterogeneous networks,” Proceedings of Machine Learning and Systems, vol. 2, pp. 429–450, 2020.
- M. Li, D. G. Andersen, A. J. Smola, and K. Yu, “Communication efficient distributed machine learning with the parameter server,” Advances in Neural Information Processing Systems, vol. 27, 2014.
- Y. Zhao, M. Li, L. Lai, N. Suda, D. Civin, and V. Chandra, “Federated learning with non-IID data,” arXiv preprint arXiv:1806.00582, 2018.
- D. J. Beutel, T. Topal, A. Mathur, X. Qiu, J. Fernandez-Marques, Y. Gao, L. Sani, K. H. Li, T. Parcollet, P. P. B. de Gusmão et al., “Flower: A friendly federated learning research framework,” arXiv preprint arXiv:2007.14390, 2020.
- C. Xie, S. Koyejo, and I. Gupta, “Asynchronous federated optimization,” arXiv preprint arXiv:1903.03934, 2019.
- M. Chen, B. Mao, and T. Ma, “FedSA: A staleness-aware asynchronous federated learning algorithm with non-IID data,” Future Generation Computer Systems, vol. 120, pp. 1–12, 2021.
- Z. Chen, W. Liao, K. Hua, C. Lu, and W. Yu, “Towards asynchronous federated learning for heterogeneous edge-powered internet of things,” Digital Communications and Networks, vol. 7, no. 3, pp. 317–326, 2021.
- M. Chen, B. Mao, and T. Ma, “Efficient and robust asynchronous federated learning with stragglers,” in International Conference on Learning Representations, 2019.
- D. Huba, J. Nguyen, K. Malik, R. Zhu, M. Rabbat, A. Yousefpour, C.-J. Wu, H. Zhan, P. Ustinov, H. Srinivas et al., “PAPAYA: Practical, private, and scalable federated learning,” Proceedings of Machine Learning and Systems, vol. 4, pp. 814–832, 2022.
- J. Nguyen, K. Malik, H. Zhan, A. Yousefpour, M. Rabbat, M. Malek, and D. Huba, “Federated learning with buffered asynchronous aggregation,” in International Conference on Artificial Intelligence and Statistics. PMLR, 2022, pp. 3581–3607.
- H. Xiao, K. Rasul, and R. Vollgraf, “Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms,” arXiv preprint arXiv:1708.07747, 2017.
- Jikun Gao
- Ioannis Mavromatis
- Peizheng Li
- Pietro Carnelli
- Aftab Khan