CARLA-Loc: Synthetic SLAM Dataset with Full-stack Sensor Setup in Challenging Weather and Dynamic Environments (2309.08909v2)
Abstract: The robustness of SLAM (Simultaneous Localization and Mapping) algorithms under challenging environmental conditions is critical to the success of autonomous driving, yet the real-world impact of such conditions remains largely unexplored because environmental parameters are difficult to alter in a controlled manner. To address this, we introduce CARLA-Loc, a synthetic dataset for challenging and dynamic environments, created with the CARLA simulator. Our dataset integrates a full stack of sensors, including cameras, event cameras, LiDAR, radar, and an IMU, with tuned parameters and modifications that ensure the realism of the generated data. CARLA-Loc comprises 7 maps and 42 sequences that vary in dynamics and weather conditions, and a pipeline script is provided that lets users generate custom sequences conveniently. We evaluated 5 visual-based and 4 LiDAR-based SLAM algorithms across the sequences, analyzing how various challenging environmental factors influence localization accuracy. Our findings demonstrate the utility of the CARLA-Loc dataset for validating the efficacy of SLAM algorithms under diverse conditions.
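The abstract mentions a pipeline script for generating custom sequences but does not reproduce it. As a minimal sketch of what such a CARLA recording pipeline involves, the snippet below uses the CARLA Python API to set challenging weather, spawn an ego vehicle, and record from one attached sensor. The host/port, blueprint names, sensor placement, weather values, and output path are illustrative assumptions, not the authors' actual script.

```python
import carla

# Connect to a running CARLA server (assumed default host/port).
client = carla.Client("localhost", 2000)
client.set_timeout(10.0)
world = client.get_world()

# Fixed-step synchronous mode keeps all sensor streams deterministic.
settings = world.get_settings()
settings.synchronous_mode = True
settings.fixed_delta_seconds = 0.05  # 20 Hz simulation step (assumed)
world.apply_settings(settings)
client.get_trafficmanager().set_synchronous_mode(True)

# Challenging weather; the parameter values here are illustrative.
world.set_weather(carla.WeatherParameters(
    cloudiness=80.0, precipitation=60.0,
    precipitation_deposits=40.0, fog_density=20.0))

# Spawn an ego vehicle at the first predefined spawn point.
blueprints = world.get_blueprint_library()
vehicle_bp = blueprints.filter("vehicle.*")[0]
vehicle = world.spawn_actor(vehicle_bp, world.get_map().get_spawn_points()[0])
vehicle.set_autopilot(True)  # let the traffic manager drive

# Attach one example sensor (RGB camera) and dump frames to disk.
camera_bp = blueprints.find("sensor.camera.rgb")
camera_bp.set_attribute("image_size_x", "1280")
camera_bp.set_attribute("image_size_y", "720")
camera = world.spawn_actor(
    camera_bp, carla.Transform(carla.Location(x=1.5, z=2.0)),
    attach_to=vehicle)
camera.listen(lambda img: img.save_to_disk("out/%06d.png" % img.frame))

for _ in range(400):  # advance the simulation, recording ~20 s
    world.tick()
```

A full-stack pipeline along these lines would additionally attach the event camera, LiDAR, radar, and IMU blueprints, and log the vehicle's ground-truth pose at every tick.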
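The abstract reports localization accuracy without naming the metric. A standard choice for comparing an estimated trajectory against ground truth is the absolute trajectory error (ATE) after a rigid alignment; the NumPy sketch below is one common formulation (the function name and the assumption of pre-associated positions are illustrative, not taken from the paper).

```python
import numpy as np

def ate_rmse(est: np.ndarray, gt: np.ndarray) -> float:
    """RMSE of the Absolute Trajectory Error after rigid alignment.

    est, gt: (N, 3) arrays of time-associated positions.
    """
    mu_e, mu_g = est.mean(axis=0), gt.mean(axis=0)
    E, G = est - mu_e, gt - mu_g            # centered point sets
    U, _, Vt = np.linalg.svd(E.T @ G)       # Kabsch/Umeyama alignment
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_g - R @ mu_e
    aligned = est @ R.T + t                 # apply (R, t) to each estimate
    return float(np.sqrt(np.mean(np.sum((aligned - gt) ** 2, axis=1))))
```

Aligning before measuring removes the arbitrary choice of world frame, so the reported error reflects drift and tracking failures rather than a constant offset.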
Authors: Yuhang Han, Zhengtao Liu, Shuo Sun, Dongen Li, Jiawei Sun, Chengran Yuan, Marcelo H. Ang Jr.