
REFT: Resource-Efficient Federated Training Framework for Heterogeneous and Resource-Constrained Environments (2308.13662v2)

Published 25 Aug 2023 in cs.LG and cs.DC

Abstract: Federated Learning (FL) plays a critical role in distributed systems. In these systems, data privacy and confidentiality hold paramount importance, particularly within edge-based data processing systems such as IoT devices deployed in smart homes. FL emerges as a privacy-enforcing sub-domain of machine learning that enables model training on client devices, eliminating the necessity to share private data with a central server. While existing research has predominantly addressed challenges pertaining to data heterogeneity, a gap remains in addressing issues such as varying device capabilities and efficient communication. These unaddressed issues have significant implications for resource-constrained environments; in particular, the practical implementation of FL-based IoT or edge systems is extremely inefficient. In this paper, we propose "Resource-Efficient Federated Training Framework for Heterogeneous and Resource-Constrained Environments (REFT)," a novel approach specifically devised to address these challenges in resource-limited devices. Our proposed method uses Variable Pruning to optimize resource utilization by adapting pruning strategies to the computational capabilities of each client. Furthermore, REFT employs knowledge distillation to minimize the need for continuous bidirectional client-server communication, achieving a significant reduction in communication bandwidth and thereby enhancing overall resource efficiency. We conduct experiments on an image classification task, and the results demonstrate the effectiveness of our approach in resource-limited settings. Our technique not only preserves data privacy and performance standards but also accommodates heterogeneous model architectures, facilitating the participation of a broader array of diverse client devices in the training process, all while consuming minimal bandwidth.
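The abstract names two mechanisms: capability-adaptive pruning and knowledge-distillation-based communication. The sketch below is a minimal PyTorch illustration of those ideas only, not the authors' REFT implementation; the capability_to_ratio mapping, the choice of L2 structured channel pruning, and the temperature value are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the paper's code): map each client's
# compute budget to a structured-pruning ratio, and use a standard
# soft-label distillation loss in place of repeated weight exchange.
import torch.nn as nn
import torch.nn.functional as F
import torch.nn.utils.prune as prune


def capability_to_ratio(client_flops: float, max_flops: float) -> float:
    """Hypothetical mapping: weaker devices receive higher pruning ratios."""
    return min(0.9, max(0.0, 1.0 - client_flops / max_flops))


def prune_for_client(model: nn.Module, ratio: float) -> nn.Module:
    """Apply L2 structured pruning (whole output channels) to conv layers."""
    for module in model.modules():
        if isinstance(module, nn.Conv2d):
            prune.ln_structured(module, name="weight", amount=ratio, n=2, dim=0)
            prune.remove(module, "weight")  # make the pruning permanent
    return model


def distillation_loss(student_logits, teacher_logits, T: float = 3.0):
    """Soft-label KD loss: KL divergence between temperature-softened outputs."""
    return F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
```

In a federated round under these assumptions, each client would prune its local copy of the model according to its own capability before training, and the server would aggregate knowledge via distilled outputs rather than full bidirectional weight transfers, which is the bandwidth saving the abstract describes.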

Authors (3)
  1. Humaid Ahmed Desai (1 paper)
  2. Amr Hilal (2 papers)
  3. Hoda Eldardiry (31 papers)