
A Safe Deep Reinforcement Learning Approach for Energy Efficient Federated Learning in Wireless Communication Networks (2308.10664v3)

Published 21 Aug 2023 in cs.LG and cs.AI

Abstract: Progressing towards a new era of AI-enabled wireless networks, concerns regarding the environmental impact of AI have been raised in both industry and academia. Federated Learning (FL) has emerged as a key privacy-preserving decentralized AI technique. Despite ongoing efforts, the environmental impact of FL remains an open problem. To minimize the overall energy consumption of an FL process, we propose orchestrating the computational and communication resources of the involved devices, while guaranteeing a target model performance. To this end, we propose a Soft Actor-Critic (SAC) Deep Reinforcement Learning (DRL) solution, where a penalty function is introduced during training to penalize strategies that violate the environment's constraints, contributing towards a safe RL process. A device-level synchronization method and a computationally cost-effective FL environment are also proposed, with the goal of further reducing energy consumption and communication overhead. Evaluation results show the effectiveness and robustness of the proposed scheme compared to four state-of-the-art baseline solutions across different network environments and FL architectures, achieving a decrease of up to 94% in total energy consumption.
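
The penalty-based safe-RL idea described above can be illustrated with a minimal sketch: wrap the training environment so that the reward is reduced whenever a constraint is violated, then train a standard SAC agent on the wrapped environment. This is only an illustration under stated assumptions, not the paper's implementation; the info key "constraint_violation" and the penalty coefficient are hypothetical placeholders for however the FL environment would report violations of its latency or performance constraints, and Pendulum-v1 merely stands in for that environment so the snippet runs end to end.

import gymnasium as gym
from stable_baselines3 import SAC

class PenaltyWrapper(gym.Wrapper):
    """Subtracts a penalty from the reward whenever the wrapped
    environment reports a constraint violation."""

    def __init__(self, env, penalty_coef=10.0):
        super().__init__(env)
        self.penalty_coef = penalty_coef

    def step(self, action):
        obs, reward, terminated, truncated, info = self.env.step(action)
        # Penalize strategies that violate the environment's constraints,
        # steering SAC away from unsafe regions during training.
        violation = info.get("constraint_violation", 0.0)
        reward -= self.penalty_coef * violation
        return obs, reward, terminated, truncated, info

# Pendulum-v1 stands in for the paper's FL energy environment here.
env = PenaltyWrapper(gym.make("Pendulum-v1"))
model = SAC("MlpPolicy", env, verbose=0)
model.learn(total_timesteps=10_000)

A fixed penalty coefficient is the simplest possible choice; the paper's actual penalty function, state, and action design for orchestrating computational and communication resources are more involved.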
