An Experimental Study of Different Aggregation Schemes in Semi-Asynchronous Federated Learning (2405.16086v1)

Published 25 May 2024 in cs.DC and cs.PF

Abstract: Federated learning is highly valued because it enables high-performance computing in distributed environments while safeguarding data privacy. To address resource heterogeneity, researchers have proposed a semi-asynchronous federated learning (SAFL) architecture. However, the performance gap between different aggregation targets in SAFL remains unexplored. In this paper, we systematically compare the performance of two algorithm modes, FedSGD and FedAvg, which correspond to aggregating gradients and models, respectively. Our results across various task scenarios indicate that these two modes exhibit a substantial performance gap. Specifically, FedSGD achieves higher accuracy and faster convergence but experiences more severe fluctuations in accuracy, whereas FedAvg excels at handling straggler issues but converges more slowly with reduced accuracy.
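
The distinction between the two aggregation targets can be made concrete with a small sketch. The following is a minimal, synchronous toy illustration, not the paper's semi-asynchronous implementation: the least-squares linear model, the client count, and all hyperparameters are illustrative assumptions. FedSGD averages one gradient per client each round and applies a single global step, whereas FedAvg lets each client take several local steps and averages the resulting model weights.

```python
# Minimal sketch contrasting FedSGD (aggregate gradients) with
# FedAvg (aggregate models) on a toy least-squares problem.
import numpy as np

rng = np.random.default_rng(0)
NUM_CLIENTS, DIM, LR, ROUNDS, LOCAL_STEPS = 4, 10, 0.1, 50, 5

# Synthetic per-client data: y = X @ w_true + noise.
w_true = rng.normal(size=DIM)
clients = []
for _ in range(NUM_CLIENTS):
    X = rng.normal(size=(32, DIM))
    y = X @ w_true + 0.01 * rng.normal(size=32)
    clients.append((X, y))

def grad(w, X, y):
    """Gradient of the mean-squared error of a linear model."""
    return 2.0 * X.T @ (X @ w - y) / len(y)

# FedSGD: each round, every client reports one local gradient;
# the server averages the gradients and takes one global step.
w_sgd = np.zeros(DIM)
for _ in range(ROUNDS):
    g_avg = np.mean([grad(w_sgd, X, y) for X, y in clients], axis=0)
    w_sgd -= LR * g_avg

# FedAvg: each round, every client runs several local steps and
# reports its model; the server averages the model weights.
w_avg = np.zeros(DIM)
for _ in range(ROUNDS):
    local_models = []
    for X, y in clients:
        w_loc = w_avg.copy()
        for _ in range(LOCAL_STEPS):
            w_loc -= LR * grad(w_loc, X, y)
        local_models.append(w_loc)
    w_avg = np.mean(local_models, axis=0)

print("FedSGD distance to w_true:", np.linalg.norm(w_sgd - w_true))
print("FedAvg distance to w_true:", np.linalg.norm(w_avg - w_true))
```

With IID data as above the two trajectories end up close; the gaps the paper studies arise once clients are heterogeneous and updates arrive semi-asynchronously.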

Authors (3)
  1. Yunbo Li (5 papers)
  2. Jiaping Gui (7 papers)
  3. Yue Wu (339 papers)
