Advancing IIoT with Over-the-Air Federated Learning: The Role of Iterative Magnitude Pruning (2403.14120v1)

Published 21 Mar 2024 in cs.LG, cs.AI, and eess.SP

Abstract: The industrial Internet of Things (IIoT) under Industry 4.0 heralds an era of interconnected smart devices where data-driven insights and machine learning (ML) fuse to revolutionize manufacturing. A noteworthy development in IIoT is the integration of federated learning (FL), which addresses data privacy and security among devices. FL enables edge sensors, also known as peripheral intelligence units (PIUs), to learn and adapt using their data locally, without explicit sharing of confidential data, to facilitate a collaborative yet confidential learning process. However, the limited memory footprint and computational power of PIUs inherently require deep neural network (DNN) models with a very compact size. Model compression techniques such as pruning can be used to reduce the size of DNN models by removing unnecessary connections that have little impact on the model's performance, thus making the models more suitable for the limited resources of PIUs. Targeting the notion of compact yet robust DNN models, we propose the integration of iterative magnitude pruning (IMP) of the DNN model being trained in an over-the-air FL (OTA-FL) environment for IIoT. We provide a tutorial overview and present a case study of the effectiveness of IMP in OTA-FL for an IIoT environment. Finally, we present future directions for further enhancing and optimizing these deep compression techniques, aiming to push the boundaries of IIoT capabilities in acquiring compact yet robust and high-performing DNN models.
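
To make the pruning workflow described in the abstract concrete, the sketch below shows generic iterative magnitude pruning with lottery-ticket-style weight rewinding in PyTorch. This is a minimal illustrative sketch, not the authors' OTA-FL implementation: the function name iterative_magnitude_pruning, the train_fn callback, the prune_frac and rounds parameters, and the per-tensor boolean keep-masks are all assumptions introduced here for illustration.

    # Minimal sketch of iterative magnitude pruning (IMP), assuming a generic
    # PyTorch model; not the authors' OTA-FL implementation.
    import copy
    import torch


    def iterative_magnitude_pruning(model, train_fn, prune_frac=0.2, rounds=5):
        """Train, prune the smallest-magnitude weights, rewind, and repeat."""
        # Snapshot the initial weights for lottery-ticket-style rewinding.
        init_state = copy.deepcopy(model.state_dict())
        # Boolean keep-masks for every weight matrix / conv kernel (dim > 1).
        masks = {name: torch.ones_like(p, dtype=torch.bool)
                 for name, p in model.named_parameters() if p.dim() > 1}

        for _ in range(rounds):
            # train_fn is a user-supplied local training loop (assumed here).
            # It is expected to re-apply the masks after each optimizer step so
            # that pruned weights stay at zero during training.
            train_fn(model)

            # Prune: per tensor, drop the prune_frac fraction of the smallest
            # surviving weights by absolute magnitude.
            for name, p in model.named_parameters():
                if name not in masks:
                    continue
                alive = p.data[masks[name]].abs()
                k = int(prune_frac * alive.numel())
                if k == 0:
                    continue
                threshold = alive.kthvalue(k).values
                masks[name] &= p.data.abs() > threshold

            # Rewind surviving weights to their initial values, zero the rest.
            model.load_state_dict(init_state)
            for name, p in model.named_parameters():
                if name in masks:
                    p.data *= masks[name]

        return model, masks

In the OTA-FL setting the paper targets, train_fn would stand in for a PIU's local update between over-the-air aggregation rounds; here it is left abstract so the sketch stays focused on the prune-and-rewind loop itself.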

Authors (4)
  1. Fazal Muhammad Ali Khan (1 paper)
  2. Hatem Abou-Zeid (26 papers)
  3. Aryan Kaushik (41 papers)
  4. Syed Ali Hassan (32 papers)
Citations (1)
