
M2DGR: A Multi-sensor and Multi-scenario SLAM Dataset for Ground Robots (2112.13659v1)

Published 19 Dec 2021 in cs.RO and cs.CV

Abstract: We introduce M2DGR: a novel large-scale dataset collected by a ground robot with a full sensor-suite including six fish-eye and one sky-pointing RGB cameras, an infrared camera, an event camera, a Visual-Inertial Sensor (VI-sensor), an inertial measurement unit (IMU), a LiDAR, a consumer-grade Global Navigation Satellite System (GNSS) receiver and a GNSS-IMU navigation system with real-time kinematic (RTK) signals. All those sensors were well-calibrated and synchronized, and their data were recorded simultaneously. The ground truth trajectories were obtained by the motion capture device, a laser 3D tracker, and an RTK receiver. The dataset comprises 36 sequences (about 1TB) captured in diverse scenarios including both indoor and outdoor environments. We evaluate state-of-the-art SLAM algorithms on M2DGR. Results show that existing solutions perform poorly in some scenarios. For the benefit of the research community, we make the dataset and tools public. The webpage of our project is https://github.com/SJTU-ViSYS/M2DGR.

Authors (5)
  1. Jie Yin
  2. Ang Li
  3. Tao Li
  4. Wenxian Yu
  5. Danping Zou
Citations (135)

Summary

A Multi-sensor and Multi-scenario SLAM Dataset for Ground Robots: An Evaluation and Overview

The paper presents "M2DGR: A Multi-sensor and Multi-scenario SLAM Dataset for Ground Robots", a comprehensive dataset designed specifically for evaluating simultaneous localization and mapping (SLAM) algorithms on ground robots. It addresses the limitations of existing datasets, which are tailored mainly to aerial vehicles or autonomous cars, and aims to improve the development and assessment of SLAM solutions for ground robots navigating challenging environments.

Dataset Composition and Innovations

The dataset is distinguished by its comprehensive sensor suite and diverse scenario coverage, spanning both indoor and outdoor environments. The suite includes six fish-eye cameras, a sky-pointing RGB camera, an infrared camera, an event camera, a VI-sensor, an IMU, a LiDAR, and GNSS receivers. All sensors are well calibrated and synchronized, and their data were recorded concurrently, establishing an extensive benchmark for evaluating SLAM performance under realistic conditions.
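For concreteness, the sketch below shows a minimal way to inspect the multi-sensor streams of a sequence. It assumes the sequences are distributed as ROS bags (as indicated on the project page) and uses the standard ROS1 `rosbag` Python API; the bag file name and topic names are illustrative placeholders and should be checked against the project README.

```python
# Minimal sketch: inspect sensor streams in an M2DGR sequence bag.
# Assumes ROS1 with the `rosbag` Python API available; the bag name and
# topic names below are hypothetical and must be verified against the
# project README.
from collections import Counter

import rosbag

BAG_PATH = "street_01.bag"        # hypothetical sequence file name
TOPICS = [
    "/velodyne_points",           # LiDAR point clouds (assumed topic)
    "/handsfree/imu",             # IMU measurements (assumed topic)
]

counts = Counter()
first, last = {}, {}

with rosbag.Bag(BAG_PATH) as bag:
    for topic, msg, t in bag.read_messages(topics=TOPICS):
        counts[topic] += 1
        first.setdefault(topic, t.to_sec())   # record first timestamp
        last[topic] = t.to_sec()              # keep updating last timestamp

for topic in TOPICS:
    span = last[topic] - first[topic]
    rate = counts[topic] / span if span > 0 else float("nan")
    print(f"{topic}: {counts[topic]} msgs over {span:.1f}s (~{rate:.1f} Hz)")
```

Comparing per-topic time spans and rates like this is a quick sanity check that the streams were indeed recorded simultaneously before running any SLAM pipeline on them.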

A significant strength of the dataset is its environmental diversity. It comprises 36 sequences covering scenarios frequently encountered in practice, such as entering lifts, transitioning between indoor and outdoor environments, and navigating corridors. The ground-truth trajectories were gathered with a motion-capture system, a laser 3D tracker, and an RTK receiver, ensuring reliable evaluation metrics.

Evaluation of SLAM Algorithms

The paper evaluates several state-of-the-art SLAM algorithms on the M2DGR dataset, covering both LiDAR-based and vision-based approaches. Notably, the dataset exposes scenarios where these algorithms perform inadequately despite their proven capabilities on prior benchmarks. For instance, visual SLAM methods struggle in low-illumination environments, where thermal-infrared cameras show an advantage. Additionally, handling transitions such as elevator rides remains problematic, underscoring the need for robustness in dynamic and vertically changing environments.
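Evaluation against ground truth in benchmarks of this kind typically reduces to the absolute trajectory error (ATE): rigidly align the estimated trajectory to the ground truth (e.g., via the Umeyama/Kabsch method) and report the RMSE of the residuals. The following is a generic numpy sketch of that metric, not the authors' evaluation code; it assumes the two trajectories are already time-associated (N, 3) position arrays.

```python
# Generic ATE sketch: rigid Kabsch/Umeyama alignment followed by RMSE.
# Illustrative only; not the authors' evaluation tooling.
import numpy as np

def ate_rmse(est: np.ndarray, gt: np.ndarray) -> float:
    """est, gt: (N, 3) arrays of time-associated positions."""
    mu_e, mu_g = est.mean(axis=0), gt.mean(axis=0)
    E, G = est - mu_e, gt - mu_g          # center both trajectories
    H = E.T @ G                           # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_g - R @ mu_e
    aligned = est @ R.T + t               # apply the fitted rigid transform
    return float(np.sqrt(np.mean(np.sum((aligned - gt) ** 2, axis=1))))

# Usage with synthetic data:
rng = np.random.default_rng(0)
gt = np.cumsum(rng.normal(size=(100, 3)), axis=0)   # fake trajectory
est = gt + rng.normal(scale=0.05, size=gt.shape)    # noisy estimate
print(f"ATE RMSE: {ate_rmse(est, gt):.3f} m")
```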

LiDAR-based methods generally outperformed vision-based ones, especially in extensive outdoor scenarios, but still struggled with abrupt dynamic transitions and indoor-to-outdoor changes. These results suggest that, despite recent advances, SLAM systems require further refinement to handle the complexities of real-world navigation for ground robots.

Implications and Future Directions

The introduction of the M2DGR dataset holds considerable promise for SLAM research by providing a more representative and challenging testbed. It encourages advancements in multi-sensor fusion technologies, highlighting the necessity for integrating diverse sensory inputs to mitigate individual sensor limitations.

A key implication of this work is the identification of unexplored research directions such as improving robustness to dynamic motions and seamless handling of vertical shifts (e.g., using elevators). The dataset also offers a platform to explore the integration of new sensory modalities, such as event cameras and thermal imaging, to enhance SLAM systems' reliability across varied operational environments.

The authors intend to extend and update the dataset periodically, aiming to establish a benchmark comparable to prominent datasets in related fields. This ongoing effort will ensure that the research community can continue to utilize a relevant and challenging dataset, adapting to advancements in SLAM technology and evolving application demands.

In conclusion, the M2DGR dataset provides a substantial contribution to the field of robotic vision and navigation, setting a new standard for evaluating SLAM systems in ground robotics. With its extensive range of scenarios and sensory data, it is poised to drive future research and innovation in robust autonomous navigation technologies.