
DVS-RG: Differential Variable Speed Limits Control using Deep Reinforcement Learning with Graph State Representation (2405.09163v1)

Published 15 May 2024 in eess.SY and cs.SY

Abstract: Variable speed limit (VSL) control is an established yet challenging problem: improving freeway traffic mobility and alleviating bottlenecks by customizing speed limits at appropriate locations based on traffic conditions. Recent advances in deep reinforcement learning (DRL) have shown promising results on VSL control problems by interacting with sophisticated environments. However, these methods ignore the inherent graph structure of the traffic state, which can be a key factor for more efficient VSL control: a graph structure can capture not only static spatial features but also dynamic temporal features of traffic. We therefore propose DVS-RG, a DRL-based differential variable speed limit controller with graph state representation. DVS-RG dynamically assigns distinct speed limits per lane at different locations. The road network topology and traffic information (e.g., occupancy, speed) are integrated into the state space of DVS-RG so that spatial features can be learned. A normalized reward combining efficiency and safety is used to train the VSL controller, avoiding excessive inefficiency or low safety. Results from a simulation study in SUMO show that DVS-RG achieves higher traffic efficiency (average waiting time reduced to 68.44%) and improved safety (number of potential collisions reduced by 15.93%) compared to state-of-the-art DRL methods.
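The two ingredients the abstract names, a graph state built from road topology plus per-segment traffic features, and a normalized reward that weighs efficiency against safety, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the two-feature choice (occupancy, speed), and the equal reward weights are all assumptions.

```python
import numpy as np

def build_graph_state(adjacency, occupancy, speed):
    """Assemble a graph state (A, X): A is the road-network adjacency
    matrix over lane segments, X stacks per-segment traffic features
    (occupancy, mean speed) into a node-feature matrix."""
    A = np.asarray(adjacency, dtype=float)
    X = np.stack([occupancy, speed], axis=1)  # shape: (num_segments, 2)
    return A, X

def normalized_reward(efficiency, safety, w_eff=0.5, w_safe=0.5):
    """Combine efficiency and safety terms, each assumed to be
    pre-normalized to [0, 1], into a single scalar reward.
    The equal weights are an illustrative assumption."""
    return w_eff * efficiency + w_safe * safety

# Three lane segments in a line: 0 -- 1 -- 2
A, X = build_graph_state(
    adjacency=[[0, 1, 0], [1, 0, 1], [0, 1, 0]],
    occupancy=np.array([0.2, 0.8, 0.4]),
    speed=np.array([0.9, 0.3, 0.6]),
)
r = normalized_reward(efficiency=0.7, safety=0.9)
```

In a full controller this (A, X) pair would feed a graph neural network that produces per-lane speed-limit actions, with the scalar reward driving the policy update (the paper trains with PPO-style DRL).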

