REFT: Resource-Efficient Federated Training Framework for Heterogeneous and Resource-Constrained Environments (2308.13662v2)
Abstract: Federated Learning (FL) plays a critical role in distributed systems where data privacy and confidentiality are paramount, particularly in edge-based data processing systems such as IoT devices deployed in smart homes. FL is a privacy-enforcing sub-domain of machine learning that enables model training on client devices, eliminating the need to share private data with a central server. While existing research has predominantly addressed challenges related to data heterogeneity, a gap remains in addressing issues such as varying device capabilities and efficient communication. These unaddressed issues have significant implications in resource-constrained environments; in particular, practical implementations of FL-based IoT or edge systems remain highly inefficient. In this paper, we propose the "Resource-Efficient Federated Training Framework for Heterogeneous and Resource-Constrained Environments (REFT)," a novel approach specifically devised to address these challenges on resource-limited devices. Our proposed method uses Variable Pruning to optimize resource utilization by adapting the pruning strategy to the computational capabilities of each client. Furthermore, REFT employs knowledge distillation to minimize the need for continuous bidirectional client-server communication, achieving a significant reduction in communication bandwidth and thereby enhancing overall resource efficiency. We conduct experiments on an image classification task, and the results demonstrate the effectiveness of our approach in resource-limited settings. Our technique not only preserves data privacy and performance standards but also accommodates heterogeneous model architectures, facilitating the participation of a broader array of diverse client devices in the training process while consuming minimal bandwidth.
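To make the two ingredients of the abstract concrete, the sketch below shows one plausible reading of them in PyTorch: a capability-aware ("variable") pruning ratio applied per client, and a standard knowledge-distillation loss of the kind used to avoid exchanging full model weights. This is a minimal illustration under stated assumptions, not the paper's actual REFT algorithm; the names `prune_ratio_for`, `apply_variable_pruning`, and `distillation_loss`, and the specific ratio mapping, are hypothetical.

```python
# Hypothetical sketch of capability-aware pruning + KD; not the paper's REFT implementation.
import torch.nn as nn
import torch.nn.functional as F
import torch.nn.utils.prune as prune


def prune_ratio_for(capability_score: float,
                    min_ratio: float = 0.1,
                    max_ratio: float = 0.9) -> float:
    """Map a client's relative compute capability in [0, 1] to a pruning ratio:
    weaker clients (lower score) receive more aggressive pruning."""
    capability_score = max(0.0, min(1.0, capability_score))
    return max_ratio - capability_score * (max_ratio - min_ratio)


def apply_variable_pruning(model: nn.Module, capability_score: float) -> nn.Module:
    """Prune every Conv2d/Linear layer by L1 magnitude at a ratio chosen from
    the client's capability -- one possible form of 'variable pruning'."""
    ratio = prune_ratio_for(capability_score)
    for module in model.modules():
        if isinstance(module, (nn.Conv2d, nn.Linear)):
            prune.l1_unstructured(module, name="weight", amount=ratio)
            prune.remove(module, "weight")  # bake the sparsity into the weights
    return model


def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Standard KD objective (Hinton et al.): temperature-softened KL term plus
    hard-label cross-entropy, standing in for weight exchange between
    client and server."""
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```

In a full pipeline one would expect each client to prune its local model before training and the server to aggregate knowledge via distillation rather than raw weight averaging; the paper's exact schedule and pruning criterion may differ from this sketch.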
Authors: Humaid Ahmed Desai, Amr Hilal, Hoda Eldardiry