Constrained Reinforcement Learning for Adaptive Controller Synchronization in Distributed SDN (2403.08775v1)
Abstract: In software-defined networking (SDN), deploying distributed SDN controllers, each responsible for managing a specific sub-network or domain, plays a critical role in balancing centralized control, scalability, reliability, and network efficiency. These controllers must be synchronized to maintain a logically centralized view of the entire network. While various approaches exist for synchronizing distributed SDN controllers, most prioritize a single goal, such as minimizing communication latency or balancing load, rather than addressing both simultaneously. This limitation becomes particularly significant for applications such as Augmented and Virtual Reality (AR/VR), which demand tightly bounded network latencies and substantial computational resources. Moreover, many existing studies in this field rely predominantly on value-based reinforcement learning (RL) methods, overlooking the potential advantages of state-of-the-art policy-based RL algorithms. To bridge this gap, our work examines deep reinforcement learning (DRL) techniques, both value-based and policy-based, that guarantee an upper latency threshold for AR/VR task offloading in SDN environments while selecting the most cost-effective offloading servers. Our evaluation results indicate that while value-based methods excel at optimizing individual network metrics such as latency or load balancing, policy-based approaches are more robust in adapting to sudden network changes or reconfiguration.
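For intuition, below is a minimal Python sketch of one standard way to encode such a latency constraint in an RL reward, via Lagrangian relaxation. The names and values here (LATENCY_BOUND, LAMBDA_LR, constrained_reward, update_multiplier, the costs in the example) are illustrative assumptions, not the paper's published formulation.

```python
# Sketch of the Lagrangian-relaxation idea behind constrained RL for
# latency-bounded AR/VR offloading. All names and constants below are
# illustrative assumptions, not the paper's actual implementation.

LATENCY_BOUND = 0.020  # assumed upper latency threshold, in seconds
LAMBDA_LR = 0.01       # assumed step size for the dual-variable update


def constrained_reward(server_cost: float, latency: float, lam: float) -> float:
    """Relaxed reward the agent maximizes: minimize server cost while paying
    a penalty, scaled by the multiplier lam, for exceeding the latency bound."""
    violation = max(0.0, latency - LATENCY_BOUND)
    return -server_cost - lam * violation


def update_multiplier(lam: float, latency: float) -> float:
    """Projected dual ascent: raise lam while the bound is violated and let
    it decay toward zero once latency stays within the threshold."""
    return max(0.0, lam + LAMBDA_LR * (latency - LATENCY_BOUND))


# Example: one environment step where the agent picked a server with cost
# 0.3 (arbitrary units) and observed 25 ms of end-to-end latency.
lam = 1.0
r = constrained_reward(server_cost=0.3, latency=0.025, lam=lam)
lam = update_multiplier(lam, latency=0.025)
print(f"reward={r:.4f}, new multiplier={lam:.5f}")
```

Written this way, any DRL agent, value-based (e.g., DQN) or policy-based (e.g., PPO), can maximize the relaxed reward, while the dual update steadily increases the penalty whenever the latency bound is violated, steering the policy toward constraint satisfaction.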