
A Deep Reinforcement Learning Approach for Adaptive Traffic Routing in Next-gen Networks (2402.04515v1)

Published 7 Feb 2024 in cs.NI and cs.AI

Abstract: Next-gen networks require a significant evolution of network management to enable automation and to adaptively adjust network configuration based on traffic dynamics. The advent of software-defined networking (SDN) and programmable switches enables flexibility and programmability. However, traditional techniques for deciding traffic policies usually rely on hand-crafted optimization and heuristic algorithms. These techniques make unrealistic assumptions, e.g., static network load and topology, to obtain tractable solutions, which are inadequate for next-gen networks. In this paper, we design and develop a deep reinforcement learning (DRL) approach for adaptive traffic routing. We design a deep graph convolutional neural network (DGCNN) integrated into the DRL framework to learn traffic behavior not only from the network topology but also from link and node attributes. We adopt the Deep Q-Learning technique to train the DGCNN model in the DRL framework without the need for a labeled training dataset, enabling the framework to quickly adapt to traffic dynamics. The model leverages Q-value estimates to select the routing path for every traffic flow request, balancing exploration and exploitation. We perform extensive experiments with various traffic patterns and compare the performance of the proposed approach with the Open Shortest Path First (OSPF) protocol. The experimental results show the effectiveness and adaptiveness of the proposed framework, which increases network throughput by up to 7.8% and reduces traffic delay by up to 16.1% compared to OSPF.
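The abstract describes selecting a routing path per flow from Q-value estimates while balancing exploration and exploitation. A minimal sketch of that idea, assuming a standard epsilon-greedy policy over candidate paths and a one-step Q-learning update; the function names, hyperparameters, and the epsilon-greedy choice are illustrative assumptions, not details taken from the paper (which trains a DGCNN to produce the Q-values):

```python
import random

def epsilon_greedy_path(q_values, epsilon, rng):
    """Pick a candidate-path index: explore uniformly with probability
    epsilon, otherwise exploit the path with the highest Q-value estimate."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda i: q_values[i])

def q_update(q, reward, q_next_max, alpha=0.1, gamma=0.9):
    """One-step Q-learning (Bellman) update toward the observed reward
    plus the discounted best next-state estimate."""
    return q + alpha * (reward + gamma * q_next_max - q)

# Illustrative use: three candidate paths for one flow request.
candidate_q = [0.42, 0.87, 0.31]          # Q-value estimates (here: made up)
choice = epsilon_greedy_path(candidate_q, epsilon=0.1, rng=random.Random(0))
```

In the paper's framework the Q-values would come from the DGCNN applied to the topology and link/node attributes, with rewards derived from observed throughput and delay; this sketch only shows the action-selection and update mechanics common to Deep Q-Learning.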


