Robust Roadside Perception: an Automated Data Synthesis Pipeline Minimizing Human Annotation (2306.17302v2)
Abstract: Recent advances in vehicle-to-infrastructure communication have elevated the importance of infrastructure-based roadside perception systems for cooperative driving. This paper addresses one of their most pivotal challenges: data insufficiency. The lack of high-quality, diverse labeled roadside sensor data leads to low robustness and poor transferability in current roadside perception systems. We propose a novel solution to this problem that creates synthesized training data using Augmented Reality. A Generative Adversarial Network is then applied to further enhance realism, producing a photo-realistic synthesized dataset capable of training or fine-tuning a roadside perception detector that is robust to varying weather and lighting conditions. Our approach was rigorously tested at two key intersections in Michigan, USA: the Mcity intersection and the State St./Ellsworth Rd roundabout. The Mcity intersection lies within the Mcity test facility, a controlled testing environment; in contrast, the State St./Ellsworth Rd site is a busy roundabout notorious for its high traffic flow and a significant number of accidents annually. Experimental results demonstrate that detectors trained solely on synthesized data perform well across all conditions. Furthermore, when combined with labeled real data, the synthesized data notably bolsters the performance of pre-existing detectors, especially in adverse conditions.
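The core of the pipeline described above is an Augmented Reality compositing step: rendered 3D vehicle models are blended onto real roadside camera frames, and the object labels come for free from the render mask rather than from human annotators. The sketch below is a minimal, hypothetical illustration of that step using only NumPy; the function and variable names are our own, and the actual pipeline additionally uses a real renderer (e.g. pyrender) for the foreground and a GAN for realism enhancement.

```python
import numpy as np

def composite(background: np.ndarray, render_rgb: np.ndarray,
              render_alpha: np.ndarray) -> np.ndarray:
    """Alpha-blend a rendered vehicle onto a roadside camera frame.

    background   : (H, W, 3) uint8 camera image
    render_rgb   : (H, W, 3) uint8 rendered vehicle image
    render_alpha : (H, W) float coverage mask in [0, 1] from the renderer
    """
    a = render_alpha[..., None]  # broadcast mask over the color channels
    out = a * render_rgb.astype(np.float32) + (1.0 - a) * background.astype(np.float32)
    return out.astype(np.uint8)

# Toy example: a gray "road" frame and a white "vehicle" patch.
bg = np.full((64, 64, 3), 80, dtype=np.uint8)
fg = np.full((64, 64, 3), 255, dtype=np.uint8)
alpha = np.zeros((64, 64), dtype=np.float32)
alpha[20:40, 20:40] = 1.0          # vehicle footprint in the frame
frame = composite(bg, fg, alpha)

# The 2D bounding-box label is derived directly from the render mask,
# which is why no manual annotation is required.
ys, xs = np.nonzero(alpha > 0.5)
box = (int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max()))
```

Because the renderer knows exactly which pixels the vehicle occupies, every synthesized frame arrives with pixel-accurate labels, which is what lets the pipeline minimize human annotation.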