Evaluating Roadside Perception for Autonomous Vehicles: Insights from Field Testing (2401.12392v1)
Abstract: Roadside perception systems are increasingly crucial in enhancing traffic safety and facilitating cooperative driving for autonomous vehicles. Despite rapid technological advancements, a major challenge persists in this emerging field: the absence of standardized evaluation methods and benchmarks for these systems. This limitation hampers the ability to assess and compare the performance of different systems effectively, thus constraining progress in this vital field. This paper introduces a comprehensive evaluation methodology specifically designed to assess the performance of roadside perception systems. Our methodology encompasses measurement techniques, metric selection, and experimental trial design, all grounded in real-world field testing to ensure the practical applicability of our approach. We applied our methodology in Mcity (https://mcity.umich.edu/), a controlled testing environment, to evaluate various off-the-shelf perception systems. This approach allowed for an in-depth comparative analysis of their performance in realistic scenarios, offering key insights into their respective strengths and limitations. The findings of this study are poised to inform the development of industry-standard benchmarks and evaluation methods, thereby enhancing the effectiveness of roadside perception system development and deployment for autonomous vehicles. We anticipate that this paper will stimulate essential discourse on standardizing evaluation methods for roadside perception systems, thus pushing the frontiers of this technology. Furthermore, our results offer both academia and industry a comprehensive understanding of the capabilities of contemporary infrastructure-based perception systems.
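
The abstract describes an evaluation methodology built around measurement, metric selection, and experimental trial design. As a rough illustration of the kind of per-frame metric computation such an evaluation typically involves, the sketch below matches reported object positions to ground truth with the Hungarian algorithm and reports the number of matches and the mean position error; the function name, distance threshold, and data layout are assumptions made for illustration, not the paper's actual pipeline.

```python
# Hypothetical per-frame evaluation step for a roadside perception system:
# match reported object positions to ground-truth positions with the
# Hungarian algorithm and compute match count and mean position error.
# Illustrative only; not the authors' implementation.
import numpy as np
from scipy.optimize import linear_sum_assignment


def evaluate_frame(detections, ground_truth, match_threshold_m=1.5):
    """detections, ground_truth: (N, 2) / (M, 2) arrays of x-y positions in meters.
    match_threshold_m: assumed maximum center distance for a valid match."""
    if len(detections) == 0 or len(ground_truth) == 0:
        return {"matches": 0, "gt": len(ground_truth), "mean_position_error_m": None}

    # Pairwise Euclidean distances between detections and ground-truth objects.
    cost = np.linalg.norm(detections[:, None, :] - ground_truth[None, :, :], axis=-1)
    det_idx, gt_idx = linear_sum_assignment(cost)

    # Keep only assignments within the distance threshold.
    valid = cost[det_idx, gt_idx] <= match_threshold_m
    errors = cost[det_idx[valid], gt_idx[valid]]

    return {
        "matches": int(valid.sum()),
        "gt": len(ground_truth),
        "mean_position_error_m": float(errors.mean()) if errors.size else None,
    }


# Example usage with synthetic positions (meters, local ground plane):
dets = np.array([[10.2, 5.1], [30.0, 2.0]])
gts = np.array([[10.0, 5.0], [29.5, 2.3], [50.0, 0.0]])
print(evaluate_frame(dets, gts))
```

Aggregating such per-frame results over an entire trial would yield trial-level statistics (e.g., detection rate and average position error), which is one plausible way the comparative analysis described above could be carried out.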