FusionPortableV2: A Unified Multi-Sensor Dataset for Generalized SLAM Across Diverse Platforms and Scalable Environments (2404.08563v2)
Abstract: Simultaneous Localization and Mapping (SLAM) technology has been widely applied in various robotic scenarios, from rescue operations to autonomous driving. However, the generalization of SLAM algorithms remains a significant challenge, as current datasets often lack scalability in terms of platforms and environments. To address this limitation, we present FusionPortableV2, a multi-sensor SLAM dataset featuring sensor diversity, varied motion patterns, and a wide range of environmental scenarios. Our dataset comprises $27$ sequences, spanning over $2.5$ hours and collected from four distinct platforms: a handheld suite, a legged robot, an unmanned ground vehicle (UGV), and a vehicle. These sequences cover diverse settings, including buildings, campuses, and urban areas, with a total length of $38.7$ km. Additionally, the dataset includes ground-truth (GT) trajectories and RGB point cloud maps covering approximately $0.3$ km$^2$. To validate the utility of our dataset in advancing SLAM research, we assess several state-of-the-art (SOTA) SLAM algorithms. Furthermore, we demonstrate the dataset's broad application beyond traditional SLAM tasks by investigating its potential for monocular depth estimation. The complete dataset, including sensor data, GT, and calibration details, is accessible at https://fusionportable.github.io/dataset/fusionportable_v2.
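The abstract mentions benchmarking SOTA SLAM algorithms against the dataset's ground-truth trajectories. As a minimal, self-contained sketch of how such an evaluation is typically done, the snippet below computes the Absolute Trajectory Error (ATE RMSE) between an estimated and a GT trajectory after timestamp association and rigid alignment. It is not part of the dataset release: the TUM-style "timestamp tx ty tz qx qy qz qw" file format, the tolerance value, and the file names are assumptions for illustration only.

```python
# Hedged sketch: ATE RMSE between a SLAM estimate and a GT trajectory.
# Assumes TUM-format text files ("t tx ty tz qx qy qz qw"); file names
# below are hypothetical, not actual FusionPortableV2 file names.
import numpy as np

def load_tum_positions(path):
    """Return timestamps and xyz positions from a TUM-format file."""
    data = np.loadtxt(path)
    return data[:, 0], data[:, 1:4]

def associate(t_ref, t_est, max_dt=0.02):
    """Greedy nearest-timestamp association between two trajectories."""
    idx_ref, idx_est = [], []
    for i, t in enumerate(t_est):
        j = int(np.argmin(np.abs(t_ref - t)))
        if abs(t_ref[j] - t) <= max_dt:
            idx_ref.append(j)
            idx_est.append(i)
    return np.asarray(idx_ref), np.asarray(idx_est)

def umeyama_align(src, dst):
    """Least-squares SE(3) alignment of src onto dst (no scale)."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    cov = (dst - mu_d).T @ (src - mu_s) / len(src)
    U, _, Vt = np.linalg.svd(cov)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    R = U @ S @ Vt
    t = mu_d - R @ mu_s
    return R, t

def ate_rmse(gt_file, est_file):
    """ATE RMSE (meters) of the estimate w.r.t. ground truth."""
    t_ref, p_ref = load_tum_positions(gt_file)
    t_est, p_est = load_tum_positions(est_file)
    i_ref, i_est = associate(t_ref, t_est)
    R, t = umeyama_align(p_est[i_est], p_ref[i_ref])
    err = p_ref[i_ref] - (p_est[i_est] @ R.T + t)
    return float(np.sqrt((err ** 2).sum(axis=1).mean()))

# Example usage (hypothetical file names):
# print(ate_rmse("gt_handheld_seq01.tum", "slam_estimate_seq01.tum"))
```

In practice the same computation can be delegated to an established evaluation toolkit; the point of the sketch is only to make the alignment-then-RMSE pipeline behind the reported SLAM results explicit.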
Authors: Hexiang Wei, Jianhao Jiao, Xiangcheng Hu, Jingwen Yu, Xupeng Xie, Jin Wu, Yilong Zhu, Yuxuan Liu, Lujia Wang, Ming Liu