Lifelong Learning for Fog Load Balancing: A Transfer Learning Approach (2310.05187v1)
Abstract: Fog computing has emerged as a promising paradigm for processing and managing the data generated by the Internet of Things (IoT). Load balancing (LB) plays a crucial role in Fog computing environments, where efficient resource allocation is needed to improve resource utilization, minimize latency, and enhance the quality of service for end-users. In this work, we improve the performance of privacy-aware Reinforcement Learning (RL) agents that optimize the execution delay of IoT applications by minimizing the waiting delay. To preserve privacy, these agents minimize the change in the number of queued requests across the whole system, i.e., without explicitly observing the number of requests queued at each Fog node or the compute capabilities of those nodes. Beyond improving the performance of these agents, we propose a lifelong learning framework in which lightweight inference models are used during deployment to minimize action delay and are retrained only when significant environmental changes occur. To improve performance, reduce training cost, and adapt the agents to such changes, we explore the application of Transfer Learning (TL). TL transfers the knowledge acquired in a source domain to a target domain, enabling the reuse of learned policies and experiences. TL can also be used to pre-train the agent in simulation before fine-tuning it in the real environment, which significantly reduces the failure probability compared to learning from scratch in the real environment. To our knowledge, no existing work in the literature uses TL to address lifelong learning for RL-based Fog LB; the lack of such adaptation is one of the main obstacles to deploying RL-based LB solutions in Fog systems.
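To make the framework concrete, below is a minimal, self-contained sketch (not the paper's implementation) of the loop the abstract describes: a policy is pre-trained in a simulated source domain, fine-tuned in the target domain by reusing its weights (the TL warm start) rather than training from scratch, served with lightweight greedy inference during deployment, and retrained from the current weights only when a simple reward-drift detector flags a significant environmental change. Everything here is an illustrative assumption: the names (`ToyFogEnv`, `fine_tune`, `deploy`), the drift threshold `drop`, the aggregate state features, and the linear Q-function standing in for the paper's deep RL agents.

```python
import numpy as np

N_NODES, STATE_DIM = 5, 3

class ToyFogEnv:
    """Toy dispatcher: one request per step goes to the chosen node; hidden
    per-node service rates drain the queues. The reward is the negative
    change in the total number of queued requests in the whole system."""
    def __init__(self, rates, seed=0):
        self.rng = np.random.default_rng(seed)
        self.rates = np.asarray(rates, dtype=float)   # hidden from the agent
        self.queues = np.zeros(len(self.rates))       # hidden from the agent

    def state(self):
        # Privacy-aware observation (assumption): aggregate statistics only,
        # never per-node queue lengths or node capabilities.
        total = self.queues.sum()
        return np.array([total, np.log1p(total), 1.0])

    def step(self, action):
        before = self.queues.sum()
        self.queues[action] += 1.0                    # enqueue the new request
        self.queues = np.maximum(self.queues - self.rng.poisson(self.rates), 0.0)
        return self.state(), -(self.queues.sum() - before)

def greedy(W, s):
    return int(np.argmax(W @ s))   # lightweight inference: one mat-vec product

def fine_tune(env, W, steps=3000, lr=1e-3, gamma=0.95, eps=0.1, seed=1):
    """Q-learning on a linear Q(s, a) = W[a] @ s. Starting from the passed-in
    weights W is the transfer step; learning from scratch would instead
    start from random weights."""
    rng = np.random.default_rng(seed)
    s = env.state()
    for _ in range(steps):
        a = rng.integers(N_NODES) if rng.random() < eps else greedy(W, s)
        s2, r = env.step(a)
        W[a] += lr * (r + gamma * np.max(W @ s2) - W[a] @ s) * s  # TD update
        s = s2
    return W

def deploy(env, W, steps=6000, window=300, drop=0.2, change=None):
    """Greedy inference during deployment; a rolling-mean reward detector
    triggers a short, warm-started retraining when performance degrades."""
    rewards, baseline, s = [], None, env.state()
    for t in range(steps):
        if change and t == change[0]:
            env.rates = np.asarray(change[1], dtype=float)  # simulated drift
        s, r = env.step(greedy(W, s))
        rewards.append(r)
        if len(rewards) >= window:
            m = float(np.mean(rewards[-window:]))
            if baseline is None:
                baseline = m                         # post-training baseline
            elif m < baseline - drop:                # significant change seen
                W = fine_tune(env, W, steps=1500)    # retrain, reusing policy
                rewards, baseline, s = [], None, env.state()
    return W

if __name__ == "__main__":
    W = np.random.default_rng(0).normal(scale=0.01, size=(N_NODES, STATE_DIM))
    sim = ToyFogEnv(rates=[0.6, 0.5, 0.4, 0.3, 0.2])            # source domain
    W = fine_tune(sim, W, steps=5000)                # pre-train in simulation
    real = ToyFogEnv(rates=[0.5, 0.45, 0.35, 0.3, 0.25], seed=7)  # target domain
    W = fine_tune(real, W, steps=1000)               # sim-to-real fine-tuning
    W = deploy(real, W, change=(3000, [0.2, 0.25, 0.35, 0.45, 0.5]))
```

The rolling-mean test is a deliberately simple stand-in; any change-point detector on the observed waiting delays could gate the retraining. The point of the warm start is that retraining after drift is short relative to learning from scratch, which is what makes lifelong operation practical.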
Authors: Maad Ebrahim, Abdelhakim Senhaji Hafid, Mohamed Riduan Abid