DRL-based Latency-Aware Network Slicing in O-RAN with Time-Varying SLAs (2401.05042v2)

Published 10 Jan 2024 in cs.NI

Abstract: The Open Radio Access Network (Open RAN) paradigm, and its reference architecture proposed by the O-RAN Alliance, is paving the way toward open, interoperable, observable, and truly intelligent cellular networks. Crucial to this evolution is Machine Learning (ML), which will play a pivotal role by providing the tools needed to realize the vision of self-organizing O-RAN systems. However, to be actionable, ML algorithms need to demonstrate high reliability, effectiveness in delivering high performance, and the ability to adapt to varying network conditions, traffic demands, and performance requirements. To address these challenges, in this paper we propose a novel Deep Reinforcement Learning (DRL) agent design for O-RAN applications that can learn control policies under varying Service Level Agreements (SLAs) with heterogeneous minimum performance requirements. We focus on the case of RAN slicing with SLAs specifying maximum tolerable end-to-end latency levels. We use the OpenRAN Gym open-source environment to train a DRL agent that can adapt to varying SLAs, and we compare it against the state of the art. We show that our agent maintains a low SLA violation rate that is 8.3x and 14.4x lower than approaches based on Deep Q-Learning (DQN) and Q-Learning, while consuming, respectively, 0.3x and 0.6x fewer resources and without the need for re-training.
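As a rough illustration of the abstract's key design point, namely exposing the SLA itself in the agent's observation so that a single policy can track time-varying latency targets without re-training, consider the minimal Gym-style sketch below. It is not the paper's implementation: the environment name SlicingEnv, the queueing proxy for latency, and the reward weights are all hypothetical assumptions.

```python
# Illustrative sketch only (not the authors' code): a toy RAN-slicing
# environment where the per-episode SLA latency target is part of the
# observation, so one DRL policy can generalize across varying SLAs.
# SlicingEnv, the latency model, and all constants are hypothetical.
import numpy as np
import gymnasium as gym
from gymnasium import spaces


class SlicingEnv(gym.Env):
    """Toy environment: pick the PRB share for a latency-sensitive slice."""

    def __init__(self, total_prbs=50, sla_range_ms=(5.0, 50.0), horizon=200):
        self.total_prbs = total_prbs
        self.sla_range_ms = sla_range_ms
        self.horizon = horizon
        # Action: fraction of PRBs allocated to the slice.
        self.action_space = spaces.Box(0.0, 1.0, shape=(1,), dtype=np.float32)
        # Observation: [offered load, measured latency (ms), SLA target (ms)].
        self.observation_space = spaces.Box(0.0, np.inf, shape=(3,), dtype=np.float32)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.t = 0
        self.load = self.np_random.uniform(0.2, 1.0)
        # Draw a fresh SLA each episode so the agent learns to condition on it.
        self.sla_ms = self.np_random.uniform(*self.sla_range_ms)
        self.latency_ms = self.sla_ms  # start at the SLA boundary
        return self._obs(), {}

    def _obs(self):
        return np.array([self.load, self.latency_ms, self.sla_ms], dtype=np.float32)

    def step(self, action):
        self.t += 1
        prbs = max(1.0, float(action[0]) * self.total_prbs)
        # Crude queueing proxy: latency grows with load, shrinks with PRBs.
        self.latency_ms = 100.0 * self.load / prbs
        violation = self.latency_ms > self.sla_ms
        # Penalize SLA violations, and lightly penalize resource usage so the
        # agent learns to meet the target with as few PRBs as possible.
        reward = (-1.0 if violation else 0.0) - 0.01 * prbs
        # Random-walk traffic load to emulate varying network conditions.
        self.load = float(np.clip(self.load + self.np_random.normal(0.0, 0.05), 0.1, 1.0))
        return self._obs(), reward, False, self.t >= self.horizon, {}
```

Because the SLA is drawn anew at every reset and fed to the policy as an input, an agent trained on such an environment with any standard DRL algorithm (e.g., PPO) can adapt to a new latency target at inference time instead of needing one policy, and one round of re-training, per SLA.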
