Multi-Modal Lidar Dataset for Benchmarking General-Purpose Localization and Mapping Algorithms (2203.03454v1)

Published 7 Mar 2022 in cs.RO

Abstract: Lidar technology has evolved significantly over the last decade, with higher resolution, better accuracy, and lower cost devices available today. In addition, new scanning modalities and novel sensor technologies have emerged in recent years. Public datasets have enabled benchmarking of algorithms and have set standards for cutting-edge technology. However, existing datasets are not representative of the technological landscape, with only a limited number of lidars available. This inherently limits the development and comparison of general-purpose algorithms in the evolving landscape. This paper presents a novel multi-modal lidar dataset with sensors showcasing different scanning modalities (spinning and solid-state), sensing technologies, and lidar cameras. The focus of the dataset is on low-drift odometry, with ground truth data available in both indoor and outdoor environments with sub-millimeter accuracy from a motion capture (MOCAP) system. For comparison over longer distances, we also include data recorded in larger spaces indoors and outdoors. The dataset contains point cloud data from spinning lidars and solid-state lidars. It also provides range images from high-resolution spinning lidars, RGB and depth images from a lidar camera, and inertial data from built-in IMUs. This is, to the best of our knowledge, the lidar dataset with the greatest variety of sensors and environments for which ground truth data is available. This dataset can be widely used in multiple research areas, such as 3D LiDAR simultaneous localization and mapping (SLAM), performance comparison between multi-modal lidars, appearance recognition, and loop closure detection. The datasets are available at: https://github.com/TIERS/tiers-lidars-dataset.

A Novel Multi-Modal Lidar Dataset for Advanced Localization and Mapping Benchmarking

The paper presents a significant contribution to the field of autonomous driving and robotic systems through the introduction of a diverse, multi-modal lidar dataset that aims to facilitate the benchmarking of general-purpose localization and mapping algorithms. The authors emphasize the limitations of existing datasets, which often feature a narrow range of lidar technologies and configurations. This limited scope restricts the development and evaluation of algorithms in an evolving technological landscape. In response, the authors have compiled a dataset featuring a variety of sensors and environments, enabling more comprehensive research and development in simultaneous localization and mapping (SLAM) domains.

Dataset Composition and Significance

The dataset's hallmark is its inclusion of multiple types of lidar sensors, including spinning lidars with 16, 64, and 128 channels, as well as two different solid-state lidars with distinct scanning patterns and fields of view. By providing this variety, the dataset offers an unprecedented opportunity to compare the performance of general-purpose algorithms across different sensing modalities. Furthermore, it includes RGB and depth images from a lidar camera, enhancing its utility for diverse research tasks such as loop closure detection and object recognition.
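To make the multi-modal structure concrete, the sketch below iterates over lidar and IMU messages from a single recording. It assumes the sequences are distributed as ROS 1 bags; the topic and file names are illustrative placeholders, not the dataset's actual identifiers.

```python
# Hypothetical sketch: iterating over multi-modal data in a ROS 1 bag.
# Topic and file names are placeholders, not the dataset's real identifiers.
import rosbag

LIDAR_TOPICS = [
    "/os_128/points",    # placeholder: 128-channel spinning lidar
    "/velodyne/points",  # placeholder: 16-channel spinning lidar
    "/livox/points",     # placeholder: solid-state lidar
]
IMU_TOPIC = "/os_128/imu"  # placeholder: built-in IMU

with rosbag.Bag("sequence_indoor_01.bag") as bag:  # placeholder filename
    for topic, msg, stamp in bag.read_messages(topics=LIDAR_TOPICS + [IMU_TOPIC]):
        if topic == IMU_TOPIC:
            # sensor_msgs/Imu: angular velocity and linear acceleration samples.
            print(stamp.to_sec(), msg.angular_velocity, msg.linear_acceleration)
        else:
            # sensor_msgs/PointCloud2: one frame of width x height points.
            print(stamp.to_sec(), topic, msg.width * msg.height)
```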

With ground truth data available at sub-millimeter accuracy in both indoor and outdoor environments, the dataset enables researchers to analyze low-drift odometry and mapping algorithms with high precision. By incorporating both indoor and outdoor data, the dataset allows for the exploration of algorithmic performance across different environmental conditions, ranging from structured urban settings to unstructured forest environments.
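As an illustration of how such ground truth can be used, the following minimal sketch computes the absolute trajectory error (ATE) of an odometry estimate against MOCAP positions. It assumes both trajectories are already time-synchronized Nx3 position arrays; the rigid alignment uses the standard closed-form Umeyama method, and the file names are hypothetical.

```python
# Minimal ATE sketch under stated assumptions: est and gt are time-synchronized
# Nx3 arrays of positions; file names below are hypothetical.
import numpy as np

def align_umeyama(est, gt):
    """Rigidly align estimated positions to ground truth (rotation + translation)."""
    mu_e, mu_g = est.mean(axis=0), gt.mean(axis=0)
    cov = (gt - mu_g).T @ (est - mu_e) / len(est)
    U, _, Vt = np.linalg.svd(cov)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # guard against reflections
    R = U @ S @ Vt
    t = mu_g - R @ mu_e
    return (R @ est.T).T + t

def ate_rmse(est, gt):
    """Root-mean-square absolute trajectory error after rigid alignment."""
    aligned = align_umeyama(est, gt)
    return np.sqrt(np.mean(np.sum((aligned - gt) ** 2, axis=1)))

# Usage with hypothetical files of per-frame x, y, z positions:
# gt  = np.loadtxt("mocap_ground_truth.txt")  # MOCAP export
# est = np.loadtxt("slam_estimate.txt")       # odometry output, same timestamps
# print(f"ATE RMSE: {ate_rmse(est, gt):.4f} m")
```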

Technical Contributions and Baseline Analysis

The paper delineates the dataset's construction and the variety of environments and sensor modalities it covers. This diversity positions it as a comprehensive resource for evaluating lidar-based SLAM systems. The authors have also performed a baseline analysis using state-of-the-art SLAM algorithms, providing an initial comparison of their performance across differing environments and sensor types. This analysis highlights the superior performance of spinning lidars in structured settings, while solid-state lidars, particularly when paired with tightly coupled SLAM algorithms, provide competitive results in unstructured forest environments.
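A comparison of this kind can be reproduced with a simple evaluation loop. The sketch below tabulates a per-sensor trajectory error; the sensor labels and estimate file names are placeholders, and the metric is a crude RMSE after removing the initial-position offset rather than a full rigid alignment.

```python
# Hypothetical per-sensor comparison loop. File names are placeholders; the
# error metric removes only the starting-position offset, a simpler (and
# cruder) alternative to full rigid alignment.
import numpy as np

def rmse_after_offset(est, gt):
    # Shift the estimate so both trajectories start at the same position.
    est_shifted = est - est[0] + gt[0]
    return np.sqrt(np.mean(np.sum((est_shifted - gt) ** 2, axis=1)))

gt = np.loadtxt("mocap_ground_truth.txt")  # placeholder MOCAP export
for sensor, est_file in [
    ("spinning_128ch", "est_os128.txt"),   # placeholder estimate files
    ("spinning_16ch", "est_vlp16.txt"),
    ("solid_state", "est_livox.txt"),
]:
    err = rmse_after_offset(np.loadtxt(est_file), gt)
    print(f"{sensor:16s} RMSE = {err:.4f} m")
```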

Implications for Future Research

The introduction of this multi-modal dataset holds several implications for the future of SLAM research. From a practical standpoint, it lays the groundwork for developing more sophisticated and robust localization and mapping techniques capable of handling both structured and unstructured environments. The dataset's extensive sensor variety also suggests applications in enhancing fusion algorithms that capitalize on the complementary strengths of different lidar technologies.

On a theoretical level, the diversity of the dataset may stimulate the development of sensor-agnostic algorithms that can be generalized across various platforms. This possibility aligns with ongoing efforts in the research community toward achieving robust, environment-independent perception systems.

Conclusions and Prospective Developments

The multi-modal lidar dataset presented constitutes a valuable tool for advancing research in the domains of autonomous driving and robotics. By enabling detailed benchmarking of SLAM algorithms across different sensor configurations and environments, it promotes the development of next-generation localization and mapping solutions. Future expansions of this dataset could explore additional environments and sensor configurations, further enriching its utility for the research community and fostering innovation at the intersection of perception, navigation, and robotics.

Authors (4)
  1. Qingqing Li (13 papers)
  2. Xianjia Yu (19 papers)
  3. Jorge Peña Queralta (54 papers)
  4. Tomi Westerlund (62 papers)
Citations (22)