Ground-Challenge: A Multi-sensor SLAM Dataset Focusing on Corner Cases for Ground Robots (2307.03890v1)
Abstract: High-quality datasets can accelerate breakthroughs and reveal promising research directions in SLAM. To support research on corner cases of visual SLAM systems, this paper presents Ground-Challenge: a challenging dataset comprising 36 trajectories with diverse corner cases, including aggressive motion, severe occlusion, changing illumination, low texture, pure rotation, motion blur, and wheel suspension. The dataset was collected by a ground robot equipped with multiple sensors: an RGB-D camera, an inertial measurement unit (IMU), a wheel odometer, and a 3D LiDAR. All sensors were well calibrated and synchronized, and their data were recorded simultaneously. We tested cutting-edge SLAM systems on our dataset and demonstrated that they are prone to drift or outright failure on specific sequences. We will release the full dataset and relevant materials upon publication to benefit the research community. For more information, visit our project website at https://github.com/sjtuyinjie/Ground-Challenge.
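The evaluation described in the abstract scores each SLAM system by how far its estimated trajectory drifts from ground truth. As a minimal sketch of how such a comparison is typically computed (not the paper's own evaluation code), the Python below calculates the absolute trajectory error (ATE) RMSE after a rigid Umeyama alignment of the estimate to the ground truth. The TUM-style file format, the file names, and the assumption that the two trajectories are already associated by timestamp are illustrative; in practice a toolkit such as evo (https://github.com/MichaelGrupp/evo) automates association, alignment, and plotting.

```python
import numpy as np

def load_tum(path):
    """Load a TUM-format trajectory file: each row is
    'timestamp tx ty tz qx qy qz qw'. Returns (timestamps, positions)."""
    data = np.loadtxt(path)  # '#' comment lines are skipped by default
    return data[:, 0], data[:, 1:4]

def umeyama_alignment(src, dst):
    """Least-squares rigid alignment (rotation + translation, no scale)
    mapping src points onto dst; both are (N, 3) arrays."""
    mu_src, mu_dst = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance between the centered point sets.
    cov = (dst - mu_dst).T @ (src - mu_src) / len(src)
    U, _, Vt = np.linalg.svd(cov)
    # Reflection guard: keep the solution a proper rotation (det = +1).
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0
    R = U @ S @ Vt
    t = mu_dst - R @ mu_src
    return R, t

def ate_rmse(gt_xyz, est_xyz):
    """Root-mean-square absolute trajectory error after alignment.
    Assumes the two trajectories are already associated by timestamp."""
    R, t = umeyama_alignment(est_xyz, gt_xyz)
    aligned = est_xyz @ R.T + t
    return np.sqrt(np.mean(np.sum((gt_xyz - aligned) ** 2, axis=1)))

if __name__ == "__main__":
    # File names below are placeholders, not files shipped with the dataset.
    _, gt = load_tum("groundtruth.txt")
    _, est = load_tum("estimate.txt")
    print(f"ATE RMSE: {ate_rmse(gt, est):.3f} m")
```

The equivalent one-liner with the evo CLI would be `evo_ape tum groundtruth.txt estimate.txt -a`, where `-a` performs the same rigid alignment before computing the error.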
Authors: Jie Yin, Hao Yin, Conghui Liang, Zhengyou Zhang