Mixed Traffic Control and Coordination from Pixels (2302.09167v4)

Published 17 Feb 2023 in cs.MA, cs.LG, and cs.RO

Abstract: Traffic congestion is a persistent problem in our society. Previous methods for traffic control have proven futile in alleviating current congestion levels, leading researchers to explore ideas with robot vehicles given the increasing presence of vehicles with different levels of autonomy on our roads. This gives rise to mixed traffic control, where robot vehicles regulate human-driven vehicles through reinforcement learning (RL). However, most existing studies use precise observations that require domain expertise and hand engineering of each road network's observation space. Additionally, precise observations combine global information, such as environment outflow, with local information, i.e., vehicle positions and velocities. Obtaining this information requires updating existing road infrastructure with vast sensor networks and communicating with potentially unwilling human drivers. We consider image observations, a modality that has not been extensively explored for mixed traffic control via RL, as the alternative: 1) images do not require a complete re-imagining of the observation space from environment to environment; 2) images are ubiquitous through satellite imagery, in-car camera systems, and traffic monitoring systems; and 3) images only require communication with equipment. In this work, we show that robot vehicles using image observations can achieve performance competitive with using precise information on environments including ring, figure eight, intersection, merge, and bottleneck. In certain scenarios, our approach even outperforms using precise observations, e.g., up to an 8% increase in average vehicle velocity in the merge environment, despite using only local traffic information as opposed to global traffic information.
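
The page does not include the paper's methodological details beyond the abstract, so the snippet below is only a minimal, assumption-laden sketch of the observation modality it describes: a small convolutional policy that maps a bird's-eye-view image of local traffic to a continuous acceleration command for a robot vehicle. The image size, grayscale input, network widths, and acceleration bound are illustrative choices, not the paper's actual architecture or training setup, and the policy would in practice be trained with deep RL rather than used off the shelf.

```python
# Minimal sketch (assumed architecture, not the paper's): a CNN policy mapping an
# image observation of local traffic to a bounded acceleration command.
import torch
import torch.nn as nn


class ImagePolicy(nn.Module):
    def __init__(self, img_size: int = 84, max_accel: float = 3.0):
        super().__init__()
        self.max_accel = max_accel  # assumed acceleration bound (m/s^2)
        # Small convolutional encoder over a single-channel bird's-eye-view image.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        # Infer the flattened feature size from a dummy forward pass.
        with torch.no_grad():
            feat_dim = self.encoder(torch.zeros(1, 1, img_size, img_size)).shape[1]
        self.head = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        # image: (batch, 1, H, W) with values in [0, 1];
        # returns an acceleration in [-max_accel, max_accel].
        return self.max_accel * torch.tanh(self.head(self.encoder(image)))


# Usage: one forward pass on a dummy 84x84 observation.
policy = ImagePolicy()
action = policy(torch.rand(1, 1, 84, 84))
```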
