Challenges of Indoor SLAM: A multi-modal multi-floor dataset for SLAM evaluation (2306.08522v1)

Published 14 Jun 2023 in cs.RO

Abstract: Robustness in Simultaneous Localization and Mapping (SLAM) remains one of the key challenges for the real-world deployment of autonomous systems. SLAM research has seen significant progress in the last two and a half decades, yet many state-of-the-art (SOTA) algorithms still struggle to perform reliably in real-world environments. There is a general consensus in the research community that we need challenging real-world scenarios which bring out different failure modes in sensing modalities. In this paper, we present a novel multi-modal indoor SLAM dataset covering challenging common scenarios that a robot will encounter and should be robust to. Our data was collected with a mobile robotics platform across multiple floors at Northeastern University's ISEC building. Such a multi-floor sequence is typical of commercial office spaces characterized by symmetry across floors and, thus, is prone to perceptual aliasing due to similar floor layouts. The sensor suite comprises seven global shutter cameras, a high-grade MEMS inertial measurement unit (IMU), a ZED stereo camera, and a 128-channel high-resolution lidar. Along with the dataset, we benchmark several SLAM algorithms and highlight the problems faced during the runs, such as perceptual aliasing, visual degradation, and trajectory drift. The benchmarking results indicate that parts of the dataset work well with some algorithms, while other data sections are challenging for even the best SOTA algorithms. The dataset is available at https://github.com/neufieldrobotics/NUFR-M3F.
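The benchmarking described above typically reports trajectory drift via the absolute trajectory error (ATE): the estimated trajectory is rigidly aligned to ground truth and the residual position error is summarized as an RMSE. The sketch below is an illustrative, minimal implementation of that metric using a scale-free Umeyama (Kabsch) alignment; the trajectory arrays and the toy rotated/shifted trajectory are hypothetical stand-ins, not data from this dataset.

```python
import numpy as np

def align_umeyama(est, gt):
    """Least-squares rigid alignment (rotation + translation, no scale)
    of estimated positions (N x 3) onto ground-truth positions (N x 3)."""
    mu_e, mu_g = est.mean(axis=0), gt.mean(axis=0)
    H = (est - mu_e).T @ (gt - mu_g)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard: force det(R) = +1
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T
    t = mu_g - R @ mu_e
    return R, t

def ate_rmse(est, gt):
    """Absolute trajectory error (RMSE in meters) after rigid alignment."""
    R, t = align_umeyama(est, gt)
    aligned = (R @ est.T).T + t
    return np.sqrt(np.mean(np.sum((aligned - gt) ** 2, axis=1)))

# Toy example: a random-walk ground-truth trajectory and a copy that has
# been rotated and shifted (a pure rigid offset, so ATE should be ~0).
gt = np.cumsum(np.random.default_rng(0).normal(size=(100, 3)), axis=0)
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
est = (Rz @ gt.T).T + np.array([1.0, -2.0, 0.5])
print(f"ATE RMSE: {ate_rmse(est, gt):.6f} m")
```

Because the toy estimated trajectory differs from ground truth only by a rigid transform, the alignment removes it entirely and the reported ATE is essentially zero; real SLAM output with drift, such as the runs benchmarked in this paper, would leave a nonzero residual.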

Authors (8)
  1. Pushyami Kaveti
  2. Aniket Gupta
  3. Dennis Giaya
  4. Madeline Karp
  5. Colin Keil
  6. Jagatpreet Nir
  7. Hanumant Singh
  8. ZhiYong Zhang
Citations (3)