LIC-Fusion 2.0: LiDAR-Inertial-Camera Odometry with Sliding-Window Plane-Feature Tracking (2008.07196v1)

Published 17 Aug 2020 in cs.RO

Abstract: Multi-sensor fusion of multi-modal measurements from commodity inertial, visual and LiDAR sensors to provide robust and accurate 6DOF pose estimation holds great potential in robotics and beyond. In this paper, building upon our prior work (i.e., LIC-Fusion), we develop a sliding-window filter based LiDAR-Inertial-Camera odometry with online spatiotemporal calibration (i.e., LIC-Fusion 2.0), which introduces a novel sliding-window plane-feature tracking for efficiently processing 3D LiDAR point clouds. In particular, after motion compensation for LiDAR points by leveraging IMU data, low-curvature planar points are extracted and tracked across the sliding window. A novel outlier rejection criterion is proposed in the plane-feature tracking for high-quality data association. Only the tracked planar points belonging to the same plane will be used for plane initialization, which makes the plane extraction efficient and robust. Moreover, we perform the observability analysis for the LiDAR-IMU subsystem and report the degenerate cases for spatiotemporal calibration using plane features. While the estimation consistency and identified degenerate motions are validated in Monte-Carlo simulations, different real-world experiments are also conducted to show that the proposed LIC-Fusion 2.0 outperforms its predecessor and other state-of-the-art methods.

Citations (119)

Summary

  • The paper introduces a sliding-window plane-feature tracking algorithm to enhance multi-sensor fusion for accurate 6DOF pose estimation.
  • It performs online spatiotemporal calibration and an observability analysis that identifies degenerate motions of the LiDAR-IMU subsystem.
  • Experimental results show that the system is computationally efficient and robust, outperforming previous methods in both simulations and real-world tests.

An Overview of LIC-Fusion 2.0: LiDAR-Inertial-Camera Odometry with Sliding-Window Plane-Feature Tracking

The manuscript presents an innovative approach to multi-sensor fusion, specifically focusing on improving the accuracy and robustness of 6DOF pose estimation by integrating LiDAR, inertial, and camera data. This work builds upon the previously proposed LIC-Fusion framework, introducing an enhanced version known as LIC-Fusion 2.0. The key advancement in this extension is the incorporation of a sliding-window plane-feature tracking algorithm designed to efficiently manage 3D LiDAR point cloud data.

The authors propose a novel method for feature tracking by leveraging planar structures in the environment. After compensating for motion distortion using IMU measurements, the algorithm extracts low-curvature planar points and tracks them across the LiDAR scans in a sliding window, applying a novel outlier-rejection criterion to keep data association reliable. Because only planar points tracked as belonging to the same plane are used for plane initialization, plane extraction remains both efficient and robust.
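To make these building blocks concrete, below is a minimal sketch, not the authors' implementation, of two steps this paragraph describes: a LOAM-style curvature score for selecting low-curvature planar candidates from a deskewed scan ring, and a least-squares plane fit over points tracked to the same plane. The function names, the neighborhood size `k`, and the NumPy formulation are all assumptions for illustration.

```python
import numpy as np

def curvature_scores(scan_line: np.ndarray, k: int = 5) -> np.ndarray:
    """LOAM-style smoothness score along one deskewed LiDAR scan ring.

    scan_line: (N, 3) array of points already motion-compensated with
    IMU data. Low scores indicate locally flat (planar) neighborhoods.
    """
    n = len(scan_line)
    scores = np.full(n, np.inf)
    for i in range(k, n - k):
        # Summing (X_i - X_j) over the 2k neighbors collapses to the
        # form below; a small residual norm means a flat neighborhood.
        resid = 2 * k * scan_line[i] \
            - scan_line[i - k:i].sum(axis=0) \
            - scan_line[i + 1:i + k + 1].sum(axis=0)
        scores[i] = np.linalg.norm(resid) / (2 * k * np.linalg.norm(scan_line[i]) + 1e-9)
    return scores

def fit_plane(points: np.ndarray):
    """Least-squares plane n.x = d from points tracked to the same plane."""
    centroid = points.mean(axis=0)
    # The singular vector with the smallest singular value of the centered
    # cloud is the direction of least variance, i.e. the plane normal.
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    normal = vt[-1]
    return normal, float(normal @ centroid)
```

In the paper's pipeline, only points whose score stays below a threshold would be handed to the sliding-window tracker, and a plane would be initialized (here via `fit_plane`) only once enough tracked points are associated with it; the threshold and `k` above are illustrative.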

Furthermore, the paper addresses the critical issue of online spatiotemporal calibration between the sensors. An observability analysis of the LiDAR-IMU subsystem is conducted to identify degenerate scenarios, which may lead to additional unobservable states. These scenarios are validated through Monte-Carlo simulations, and the results are corroborated by real-world experiments.
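As a simplified, assumption-laden illustration of such degeneracy (a toy model, not the paper's full nonlinear observability analysis): each plane constrains translation only along its normal, so stacking the normals' outer products into an information matrix and inspecting its eigenvalues exposes unconstrained directions.

```python
import numpy as np

def translation_degeneracy(normals: np.ndarray, tol: float = 1e-3):
    """Flag translation directions left unconstrained by observed planes.

    Each plane constrains translation only along its unit normal n_i, so
    the (toy) information matrix is sum_i n_i n_i^T. Eigenvalues near zero
    correspond to unobservable directions, e.g. all planes being parallel.
    """
    info = normals.T @ normals                   # 3x3 accumulated n n^T
    eigvals, eigvecs = np.linalg.eigh(info)      # ascending eigenvalues
    weak = eigvals < tol * max(eigvals.max(), 1e-12)
    return eigvals, eigvecs[:, weak]             # columns: degenerate axes

# A corridor seen as two parallel walls: only motion along the shared
# normal (x here) is constrained; y and z come back as degenerate.
walls = np.array([[1.0, 0.0, 0.0], [-1.0, 0.0, 0.0]])
print(translation_degeneracy(walls))
```

This matches the intuition behind the reported degenerate cases: geometry that fails to excite all directions (or all rotations, in the full analysis) leaves parts of the spatiotemporal calibration unobservable.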

The experimental evaluation, conducted on both simulated and real-world datasets, demonstrates that LIC-Fusion 2.0 outperforms the original LIC-Fusion and other state-of-the-art methods. Its ability to maintain estimation consistency and accuracy is especially notable in complex environments where poor lighting or ambiguous structure can confound typical odometry solutions.

From a theoretical perspective, this work makes substantial contributions to the understanding of observability within multi-sensor fusion systems. The implications of these findings suggest that future developments could focus on extending this framework to accommodate additional sensor modalities or explore more nuanced features and environments. Additionally, the proposed method is conducive to real-time applications, given its computational efficiency and robustness.

Looking forward, incorporating sliding-window edge-feature tracking for LiDAR could augment the system's capability, enabling even more comprehensive multi-modal fusion strategies. Such advancements would further consolidate the system's role in autonomous navigation, including applications in autonomous driving and advanced robotics.
