
Communication-Efficient Model Aggregation with Layer Divergence Feedback in Federated Learning (2404.08324v1)

Published 12 Apr 2024 in cs.DC

Abstract: Federated Learning (FL) facilitates collaborative machine learning by training models on local datasets and subsequently aggregating these local models at a central server. However, the frequent exchange of model parameters between clients and the central server can cause significant communication overhead during FL training. To address this problem, this paper proposes a novel FL framework, Model Aggregation with Layer Divergence Feedback (FedLDF). Specifically, we calculate the divergence between each client's local model and the global model from the previous round. Through layer-wise divergence feedback, only the distinct layers of each client are uploaded, which effectively reduces the amount of data transferred. Moreover, the derived convergence bound reveals that the client access ratio is positively correlated with model performance. Simulation results show that our algorithm uploads local models with reduced communication overhead while maintaining superior global model performance.
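The layer divergence feedback idea described in the abstract can be illustrated with a minimal sketch: each client compares its locally trained layers against the previous-round global model, uploads only the layers that diverge the most, and the server averages whatever it receives per layer. The function names (`layer_divergence`, `select_layers`, `aggregate`), the L2 divergence metric, and the `upload_fraction` parameter are illustrative assumptions, not the exact formulation used in FedLDF.

```python
import numpy as np

def layer_divergence(local_layers, global_layers):
    """Per-layer L2 distance between a client's local model and the
    previous-round global model (one illustrative divergence metric)."""
    return {
        name: float(np.linalg.norm(local_layers[name] - global_layers[name]))
        for name in global_layers
    }

def select_layers(divergence, upload_fraction=0.5):
    """Keep only the most divergent layers for upload, reducing the
    amount of data sent to the server."""
    k = max(1, int(len(divergence) * upload_fraction))
    ranked = sorted(divergence, key=divergence.get, reverse=True)
    return set(ranked[:k])

def aggregate(global_layers, client_uploads):
    """Server-side aggregation: average each layer over the clients that
    uploaded it; layers no client sent keep their previous global value."""
    new_global = {name: w.copy() for name, w in global_layers.items()}
    for name in global_layers:
        received = [u[name] for u in client_uploads if name in u]
        if received:
            new_global[name] = np.mean(received, axis=0)
    return new_global

# One communication round (toy example with random "models"):
rng = np.random.default_rng(0)
global_model = {f"layer{i}": rng.normal(size=(4, 4)) for i in range(6)}
client_uploads = []
for _ in range(3):  # three participating clients
    # Random perturbations stand in for local training on private data.
    local_model = {n: w + rng.normal(scale=0.1, size=w.shape)
                   for n, w in global_model.items()}
    div = layer_divergence(local_model, global_model)
    chosen = select_layers(div, upload_fraction=0.5)
    client_uploads.append({n: local_model[n] for n in chosen})
global_model = aggregate(global_model, client_uploads)
```

In a full FL loop, clients would first run local SGD on their own data before the divergence step; here the random perturbations only keep the example self-contained.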

Authors (5)
  1. Liwei Wang (239 papers)
  2. Jun Li (778 papers)
  3. Wen Chen (318 papers)
  4. Qingqing Wu (262 papers)
  5. Ming Ding (219 papers)