Attention-based Open RAN Slice Management using Deep Reinforcement Learning (2306.09490v1)
Abstract: As emerging networks such as the Open Radio Access Network (O-RAN) and 5G continue to grow, the demand for diverse services with differing requirements is increasing. Network slicing has emerged as a potential solution for meeting these varied service requirements, but managing network slices while maintaining quality of service (QoS) in dynamic environments is a challenging task. Machine learning (ML) approaches to the optimal control of dynamic networks can enhance performance by preventing Service Level Agreement (SLA) violations, which is critical for dependable decision-making and for satisfying the needs of emerging networks. Although reinforcement learning (RL)-based control methods are effective for real-time monitoring and control of network QoS, generalization is necessary to improve decision-making reliability. This paper introduces an attention-based deep RL (ADRL) technique that leverages O-RAN's disaggregated modules and cooperation among distributed agents to achieve better performance through effective information extraction and generalization. The proposed method places a value-attention network between the distributed agents to enable reliable and near-optimal decision-making. Simulation results demonstrate significant improvements in network performance compared to baseline deep RL (DRL) methods.
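The abstract does not specify the internals of the value-attention network, but the general mechanism it builds on, scaled dot-product attention over per-agent embeddings, is standard in attention-based multi-agent RL. The sketch below is a hedged illustration under assumed shapes (each distributed agent contributes a key and value embedding, and a querying agent computes an attention-weighted aggregate to inform its value estimate); the function name `value_attention` and all dimensions are illustrative, not taken from the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def value_attention(query, keys, values):
    """Scaled dot-product attention: one agent's query attends over the
    embeddings contributed by all distributed agents, producing an
    attention-weighted aggregate (illustrative sketch, not the paper's
    exact architecture)."""
    d_k = keys.shape[-1]
    scores = query @ keys.T / np.sqrt(d_k)   # one score per agent
    weights = softmax(scores)                # attention weights, sum to 1
    return weights @ values, weights         # aggregated embedding

# Toy example: 3 distributed agents, 4-dimensional embeddings.
rng = np.random.default_rng(0)
q = rng.standard_normal(4)        # querying agent's embedding
K = rng.standard_normal((3, 4))   # per-agent key embeddings
V = rng.standard_normal((3, 4))   # per-agent value embeddings
agg, w = value_attention(q, K, V)
```

In a full ADRL pipeline, the aggregate `agg` would feed a downstream value head so each agent's estimate is conditioned on information extracted from its peers, which is the cooperation mechanism the abstract describes at a high level.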