
Attention-based Open RAN Slice Management using Deep Reinforcement Learning (2306.09490v1)

Published 15 Jun 2023 in cs.DC, cs.LG, cs.NI, cs.SY, and eess.SY

Abstract: As emerging networks such as Open Radio Access Networks (O-RAN) and 5G continue to grow, the demand for diverse services with different requirements is increasing. Network slicing has emerged as a potential solution to address these differing service requirements. However, managing network slices while maintaining quality of service (QoS) in dynamic environments is a challenging task. Utilizing machine learning (ML) approaches for optimal control of dynamic networks can enhance network performance by preventing Service Level Agreement (SLA) violations, which is critical for dependable decision-making and for satisfying the needs of emerging networks. Although reinforcement learning (RL)-based control methods are effective for real-time monitoring and control of network QoS, generalization is necessary to improve decision-making reliability. This paper introduces an attention-based deep RL (ADRL) technique that leverages O-RAN's disaggregated modules and distributed agent cooperation to achieve better performance through effective information extraction and improved generalization. The proposed method introduces a value-attention network between distributed agents to enable reliable and optimal decision-making. Simulation results demonstrate significant improvements in network performance compared to other DRL baseline methods.
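The abstract's core mechanism is a value-attention network that lets distributed agents weight each other's information when estimating values. The paper does not reproduce its exact architecture here, but the building block is standard scaled dot-product attention across agent embeddings. The sketch below is a minimal, framework-free illustration of that block, assuming each agent contributes one observation-embedding row; the function name `value_attention` and the weight matrices `W_q`, `W_k`, `W_v` are illustrative, not taken from the paper.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

def value_attention(agent_embeds, W_q, W_k, W_v):
    """Scaled dot-product attention across agents.

    agent_embeds: (n_agents, d_in) matrix, one embedding per agent.
    W_q, W_k, W_v: (d_in, d_k) projection matrices (hypothetical names).

    Returns (context, weights):
      context[i] is agent i's attention-weighted mix of all agents' values,
      weights[i, j] is how much agent i attends to agent j.
    """
    Q = agent_embeds @ W_q                      # queries, one per agent
    K = agent_embeds @ W_k                      # keys
    V = agent_embeds @ W_v                      # values
    d_k = Q.shape[1]
    scores = Q @ K.T / np.sqrt(d_k)             # (n_agents, n_agents)
    weights = np.apply_along_axis(softmax, 1, scores)
    context = weights @ V                       # per-agent aggregated value info
    return context, weights
```

In a multi-agent critic along the lines the abstract describes, each agent's context vector would be concatenated with its own features before value estimation, so the value head can condition on what the other agents observe; the projection matrices would be learned jointly with the rest of the network.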

