Robust Odometry and Mapping for Multi-LiDAR Systems with Online Extrinsic Calibration (2010.14294v2)

Published 27 Oct 2020 in cs.RO and cs.CV

Abstract: Combining multiple LiDARs enables a robot to maximize its perceptual awareness of environments and obtain sufficient measurements, which is promising for simultaneous localization and mapping (SLAM). This paper proposes a system to achieve robust and simultaneous extrinsic calibration, odometry, and mapping for multiple LiDARs. Our approach starts with measurement preprocessing to extract edge and planar features from raw measurements. After a motion and extrinsic initialization procedure, a sliding window-based multi-LiDAR odometry runs onboard to estimate poses with online calibration refinement and convergence identification. We further develop a mapping algorithm to construct a global map and optimize poses with sufficient features together with a method to model and reduce data uncertainty. We validate our approach's performance with extensive experiments on ten sequences (4.60km total length) for the calibration and SLAM and compare them against the state-of-the-art. We demonstrate that the proposed work is a complete, robust, and extensible system for various multi-LiDAR setups. The source code, datasets, and demonstrations are available at https://ram-lab.com/file/site/m-loam.

Authors (4)
  1. Jianhao Jiao (41 papers)
  2. Haoyang Ye (27 papers)
  3. Yilong Zhu (16 papers)
  4. Ming Liu (421 papers)
Citations (98)

Summary

Overview of "Robust Odometry and Mapping for Multi-LiDAR Systems with Online Extrinsic Calibration"

The paper "Robust Odometry and Mapping for Multi-LiDAR Systems with Online Extrinsic Calibration" addresses the challenges of achieving high-precision simultaneous localization and mapping (SLAM) in multi-LiDAR setups. Traditional single-LiDAR systems often face limitations due to data sparsity and a restricted field of view (FOV), particularly when deployed in complex environments. This research introduces a comprehensive SLAM framework, referred to as M-LOAM, which overcomes these challenges and offers robust extrinsic calibration and mapping capabilities.

Key Contributions

  1. Automatic Initialization and Calibration:
    • The framework introduces an automatic procedure to initialize the system's motion and extrinsic parameters without human intervention or prior knowledge of the sensor layout. The extrinsic calibration leverages motion-based techniques to derive initial estimates, which are then refined through optimization (see the first sketch after this list).
  2. Sliding Window Multi-LiDAR Odometry:
    • The authors propose a sliding window-based approach to odometry that fuses edge and planar features extracted from multiple LiDARs to improve the precision of pose estimation. Exploiting inter-sensor data fusion over multiple frames significantly reduces drift (a representative point-to-plane residual is sketched after this list).
  3. Online Calibration with Convergence Monitoring:
    • M-LOAM provides an online calibration mechanism that continuously refines the extrinsics. The system detects converged calibration states using the degeneracy factor, maintaining robustness across varied trajectories and environments (a convergence check of this kind is sketched after this list).
  4. Uncertainty Propagation:
    • The proposed method models and integrates uncertainties arising from sensor noise, pose estimation, and extrinsic perturbations. This modeling extends to mapping, where each transformed LiDAR point is associated with an uncertainty measure, improving map consistency and reliability (a first-order propagation sketch follows this list).
  5. Uncertainty-Aware Mapping:
    • The mapping component builds a global map that captures and accounts for uncertainties, ensuring robustness against measurement noise and pose ambiguities. By selecting more reliable data points, the mapping process achieves higher fidelity over extensive navigation tasks.
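
The following sketches are illustrative only and are not the authors' implementation (M-LOAM is released as C++/ROS code at the project page). For contribution 1, motion-based extrinsic initialization is commonly posed as a hand-eye problem on paired relative rotations of the two sensors; the quaternion ordering, function names, and batch SVD solution below are assumptions made for this sketch.

```python
import numpy as np

def left_mat(q):
    """Matrix L(q) such that q ⊗ p = L(q) @ p for quaternions in (w, x, y, z) order."""
    w, x, y, z = q
    return np.array([[w, -x, -y, -z],
                     [x,  w, -z,  y],
                     [y,  z,  w, -x],
                     [z, -y,  x,  w]])

def right_mat(q):
    """Matrix R(q) such that p ⊗ q = R(q) @ p for quaternions in (w, x, y, z) order."""
    w, x, y, z = q
    return np.array([[w, -x, -y, -z],
                     [x,  w,  z, -y],
                     [y, -z,  w,  x],
                     [z,  y, -x,  w]])

def init_extrinsic_rotation(rel_quats_primary, rel_quats_auxiliary):
    """Estimate the extrinsic rotation q_x between two LiDARs from paired relative
    rotations, using the hand-eye constraint q_a ⊗ q_x = q_x ⊗ q_b, i.e.
    (L(q_a) - R(q_b)) q_x = 0 stacked over all motion pairs."""
    A = np.vstack([left_mat(qa) - right_mat(qb)
                   for qa, qb in zip(rel_quats_primary, rel_quats_auxiliary)])
    _, _, vt = np.linalg.svd(A)       # q_x spans the (near-)null space of A
    q_x = vt[-1]
    return q_x / np.linalg.norm(q_x)
```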
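
For contribution 2, LOAM-style sliding-window odometry minimizes distances between transformed feature points and local geometric primitives. The sketch below shows a point-to-plane residual of that general kind; the argument names and the plane-fitting step are illustrative and not taken from the paper.

```python
import numpy as np

def point_to_plane_residual(point, body_pose, extrinsic, plane_points):
    """Signed distance from a transformed feature point to a plane fitted to its
    nearest neighbours in the local map.

    point        : (3,) feature point in the auxiliary LiDAR frame
    body_pose    : (R, t) of the primary sensor in the world frame
    extrinsic    : (R_ext, t_ext) mapping the auxiliary LiDAR into the primary frame
    plane_points : (N, 3) neighbouring map points defining the plane
    """
    R, t = body_pose
    R_ext, t_ext = extrinsic

    # Chain the extrinsic and the body pose to express the point in the world frame.
    p_world = R @ (R_ext @ point + t_ext) + t

    # Fit a plane to the neighbours: its normal is the eigenvector of the
    # smallest eigenvalue of the demeaned-point covariance.
    centroid = plane_points.mean(axis=0)
    cov = np.cov((plane_points - centroid).T)
    _, eigvecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
    normal = eigvecs[:, 0]

    return float(normal @ (p_world - centroid))
```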
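
For contribution 3, the degeneracy factor is the smallest eigenvalue of the approximate Hessian of the calibration sub-problem, so a convergence check can compare it against a threshold. The threshold value and function name below are placeholders, not numbers from the paper.

```python
import numpy as np

def extrinsics_converged(jac_extrinsic, lambda_threshold=100.0):
    """Declare online extrinsic calibration converged when the smallest eigenvalue
    of J^T J (the degeneracy factor) exceeds a threshold, i.e. every direction of
    the extrinsic parameters is well constrained by the residuals."""
    hessian_approx = jac_extrinsic.T @ jac_extrinsic
    eigvals = np.linalg.eigvalsh(hessian_approx)   # ascending order
    return bool(eigvals[0] > lambda_threshold)
```

Once such a check passes, the calibration can be treated as fixed, which is consistent with the convergence identification described above.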
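
For contributions 4 and 5, a first-order propagation assigns each transformed point a covariance that combines measurement noise with pose uncertainty, and the mapping step prefers points whose covariance is small. The 6x6 pose-covariance ordering (translation before rotation), the left-perturbation convention, and the trace threshold are assumptions of this sketch.

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix such that skew(a) @ b == np.cross(a, b)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def propagate_point_covariance(point, point_cov, R, t, pose_cov):
    """First-order covariance of p' = R @ p + t, combining the 3x3 measurement
    noise `point_cov` with the 6x6 pose covariance `pose_cov` (translation first,
    left-perturbation convention on SE(3))."""
    p_world = R @ point + t
    # Jacobian of p' w.r.t. the 6-DoF perturbation: identity for translation,
    # -[p']_x for rotation.
    J_pose = np.hstack([np.eye(3), -skew(p_world)])          # 3 x 6
    return R @ point_cov @ R.T + J_pose @ pose_cov @ J_pose.T

def is_reliable(cov, trace_threshold=0.05):
    """Keep a map point only if its propagated covariance is small."""
    return float(np.trace(cov)) < trace_threshold
```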

Experimental Validation

The paper rigorously tests the M-LOAM framework on three platforms: a simulated robot, a handheld device, and an autonomous vehicle equipped with multiple LiDARs. Results across these platforms demonstrate the system's ability to achieve centimeter-level calibration accuracy and low pose drift in SLAM applications. The paper benchmarks M-LOAM against state-of-the-art methods, showing superior performance in terms of accuracy and map consistency, especially in complex scenarios such as urban environments and poorly constrained indoor spaces.

Implications and Future Directions

The research underlines the practical significance of multi-LiDAR systems in enhancing SLAM capabilities for robotic applications, especially in autonomous vehicles. The presented approach sets a precedent for future advances in sensor fusion algorithms that require minimal calibration overhead while maximizing environmental perception.

Future work could explore integrating higher-level semantic features for increased robustness in dynamic and cluttered environments, potentially looking into deep learning-derived features to improve SLAM under various conditions. Additionally, extending such frameworks to other sensor modalities like radars and cameras could broaden the applicability of robust multi-modal SLAM solutions.
