
Efficient Model Compression for Hierarchical Federated Learning (2405.17522v1)

Published 27 May 2024 in cs.LG and cs.DC

Abstract: Federated learning (FL), an emerging collaborative learning paradigm, has garnered significant attention for its capacity to preserve privacy in distributed learning systems. In these systems, clients collaboratively train a unified neural network model on their local datasets and share model parameters rather than raw data, enhancing privacy. FL systems are predominantly designed for mobile and edge computing environments, where training typically occurs over wireless networks; consequently, as model sizes grow, conventional FL frameworks consume increasingly substantial communication resources. To address this challenge and improve communication efficiency, this paper introduces a novel hierarchical FL framework that combines the benefits of clustered FL and model compression. We present an adaptive clustering algorithm that identifies a core client and dynamically organizes clients into clusters. To further enhance transmission efficiency, each core client runs a local aggregation with compression (LC aggregation) algorithm after collecting compressed models from the other clients in its cluster. Simulation results confirm that the proposed algorithms maintain comparable predictive accuracy while significantly reducing energy consumption relative to existing FL mechanisms.
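The hierarchical round described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the helper names (`top_k_compress`, `lc_aggregate`, `hierarchical_round`), the choice of top-k sparsification as the compression scheme, and the entry-wise averaging rule are all assumptions made for illustration, since the paper's exact LC aggregation details are not reproduced here.

```python
def top_k_compress(update, k):
    """Keep only the k largest-magnitude entries of a dense update
    (top-k sparsification, a common model-compression choice)."""
    idx = sorted(range(len(update)), key=lambda i: abs(update[i]), reverse=True)[:k]
    return {i: update[i] for i in idx}  # sparse {index: value} representation

def lc_aggregate(sparse_updates, dim):
    """Core client's step: average the compressed updates it collected,
    dividing each coordinate by the number of clients that reported it."""
    total = [0.0] * dim
    counts = [0] * dim
    for su in sparse_updates:
        for i, v in su.items():
            total[i] += v
            counts[i] += 1
    return [t / c if c else 0.0 for t, c in zip(total, counts)]

def hierarchical_round(client_updates, k):
    """One cluster's round: members compress their updates, then the
    core client aggregates the compressed updates before uplink."""
    dim = len(client_updates[0])
    compressed = [top_k_compress(u, k) for u in client_updates]
    return lc_aggregate(compressed, dim)
```

In this sketch the communication saving comes from each cluster member sending only k sparse coordinates, and from the core client forwarding a single aggregated model to the server instead of one model per client.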

Authors (4)
  1. Xi Zhu
  2. Songcan Yu
  3. Junbo Wang
  4. Qinglin Yang