UniTSA: A Universal Reinforcement Learning Framework for V2X Traffic Signal Control (2312.05090v1)

Published 8 Dec 2023 in eess.SY, cs.LG, and cs.SY

Abstract: Traffic congestion is a persistent problem in urban areas, which calls for the development of effective traffic signal control (TSC) systems. While existing Reinforcement Learning (RL)-based methods have shown promising performance in optimizing TSC, it is challenging to generalize these methods across intersections of different structures. In this work, a universal RL-based TSC framework is proposed for Vehicle-to-Everything (V2X) environments. The proposed framework introduces a novel agent design that incorporates a junction matrix to characterize intersection states, making the proposed model applicable to diverse intersections. To equip the proposed RL-based framework with enhanced capability of handling various intersection structures, novel traffic state augmentation methods are tailor-made for signal light control systems. Finally, extensive experimental results derived from multiple intersection configurations confirm the effectiveness of the proposed framework. The source code in this work is available at https://github.com/wmn7/Universal_Light
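The abstract's two key ideas, a junction matrix that characterizes intersection state and augmentation of that state so one policy generalizes across intersection layouts, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the per-movement features (queue length, occupancy, mean speed, green flag), the fixed-size padding to `max_movements`, and the row-shuffle augmentation are all assumptions chosen to make the idea concrete; consult the released code for the actual design.

```python
import numpy as np

def junction_matrix(movements, max_movements=12, n_features=4):
    """Illustrative junction matrix: one row per traffic movement,
    zero-padded to a fixed shape so intersections with different
    numbers of approaches map into the same observation space.
    (Features and padding scheme are assumptions for illustration.)"""
    m = np.zeros((max_movements, n_features))
    for i, mv in enumerate(movements[:max_movements]):
        m[i] = [mv["queue"], mv["occupancy"], mv["mean_speed"], mv["is_green"]]
    return m

def shuffle_augment(matrix, rng):
    """Illustrative state augmentation: permute the (padded) movement
    rows so the policy cannot rely on a fixed movement ordering and
    must learn structure-invariant features."""
    perm = rng.permutation(matrix.shape[0])
    return matrix[perm]

rng = np.random.default_rng(0)
state = junction_matrix([
    {"queue": 5, "occupancy": 0.4, "mean_speed": 3.2, "is_green": 1},
    {"queue": 2, "occupancy": 0.1, "mean_speed": 7.8, "is_green": 0},
])
aug = shuffle_augment(state, rng)
```

Padding to a fixed matrix shape plus order-invariant augmentation is one simple way to let a single RL agent consume observations from three-way, four-way, or irregular intersections without per-layout retraining.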


