Seamless Virtual Reality with Integrated Synchronizer and Synthesizer for Autonomous Driving (2403.03541v1)
Abstract: Virtual reality (VR) is a promising data engine for autonomous driving (AD). However, data fidelity in this paradigm is often degraded by VR inconsistency, against which existing VR approaches are ineffective because they ignore the inter-dependency between low-level VR synchronizer designs (i.e., the data collector) and high-level VR synthesizer designs (i.e., the data processor). This paper presents a seamless virtual reality (SVR) platform for AD that mitigates such inconsistency, enabling VR agents to interact with each other in a shared symbiotic world. The crux of SVR is an integrated synchronizer and synthesizer (IS2) design, which consists of a drift-aware lidar-inertial synchronizer for VR colocation and a motion-aware deep visual synthesis network for augmented-reality image generation. We implement SVR on car-like robots in two sandbox platforms, achieving cm-level VR colocalization accuracy and 3.2% VR image deviation, thereby avoiding missed collisions and model clipping. Experiments show that the proposed SVR reduces intervention times, missed turns, and failure rates compared with other benchmarks. The SVR-trained neural network can handle unseen situations in real-world environments by leveraging the knowledge learned in the VR space.
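To make the two-stage IS2 idea concrete, here is a minimal Python sketch of the pipeline the abstract describes: a synchronizer that maps a physical lidar-inertial pose into the shared VR frame (colocation), followed by a synthesizer that composites a rendered virtual agent onto the real camera frame. The function names `colocate` and `synthesize`, the rigid-anchor calibration, and the mask-based compositing are illustrative assumptions, not the paper's drift-aware synchronizer or motion-aware synthesis network.

```python
import numpy as np

def colocate(T_world_lidar: np.ndarray, T_vr_anchor: np.ndarray) -> np.ndarray:
    """Map a physical pose into the shared VR frame (hypothetical sketch).

    T_world_lidar: 4x4 homogeneous pose from lidar-inertial odometry
                   (e.g., a method such as FAST-LIO2).
    T_vr_anchor:   4x4 calibration between the physical and virtual worlds;
                   the paper's drift-aware synchronizer would refine this
                   online, whereas here it is assumed fixed.
    """
    return T_vr_anchor @ T_world_lidar

def synthesize(real_img: np.ndarray, virtual_img: np.ndarray,
               virtual_mask: np.ndarray) -> np.ndarray:
    """Overlay rendered virtual-agent pixels onto the real camera frame.

    virtual_mask: boolean HxW array marking virtual-object pixels. The
    paper's motion-aware deep network learns this synthesis end to end;
    this sketch simply replaces masked pixels.
    """
    out = real_img.copy()
    out[virtual_mask] = virtual_img[virtual_mask]
    return out

# Toy usage: identity anchor and 2x2 RGB frames.
pose_in_vr = colocate(np.eye(4), np.eye(4))
frame = synthesize(np.zeros((2, 2, 3)), np.ones((2, 2, 3)),
                   np.array([[True, False], [False, True]]))
```

In the real system, the quality of `T_vr_anchor` is what the cm-level colocalization accuracy refers to, and the learned synthesis replaces the naive mask overlay so that occlusions and motion blur are handled consistently.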
Authors: He Li, Ruihua Han, Zirui Zhao, Wei Xu, Qi Hao, Shuai Wang, Chengzhong Xu