- The paper introduces a sliding-window plane-feature tracking algorithm to enhance multi-sensor fusion for accurate 6DOF pose estimation.
- It performs online spatiotemporal calibration and uses an observability analysis of the LiDAR-IMU subsystem to identify degenerate motions that render additional states unobservable.
- Experimental results show that the system is computationally efficient and robust, outperforming previous methods in both simulations and real-world tests.
An Overview of LIC-Fusion 2.0: LiDAR-Inertial-Camera Odometry with Sliding-Window Plane-Feature Tracking
The manuscript presents a multi-sensor fusion approach aimed at improving the accuracy and robustness of 6DOF pose estimation by tightly integrating LiDAR, inertial, and camera data. The work builds upon the previously proposed LIC-Fusion framework, introducing an enhanced version, LIC-Fusion 2.0. The key advancement of this extension is a sliding-window plane-feature tracking algorithm designed to process 3D LiDAR point clouds efficiently.
The authors propose a feature-tracking method that leverages planar structures in the environment. After compensating for motion distortion using IMU data, the algorithm selectively tracks low-curvature (planar) points across the multiple LiDAR scans contained in a sliding window. Because planes are initialized only from points that have been consistently tracked, the approach improves both the computational efficiency and the robustness of plane extraction.
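To make the selection-and-initialization step concrete, the sketch below shows one plausible way to pick low-curvature candidate points and fit a plane from points tracked across several scans. This is a minimal illustration, not the authors' implementation: the curvature measure, thresholds, and function names are assumptions, and the tracked points are assumed to be already expressed in a common frame after IMU-based undistortion.

```python
import numpy as np

def local_curvature(scan, i, k=5):
    """Approximate curvature of point i from its k neighbors on either side
    of the same scan line (a LOAM-style smoothness measure)."""
    p = scan[i]
    neighbors = np.vstack([scan[i - k:i], scan[i + 1:i + k + 1]])
    diff = neighbors - p
    return np.linalg.norm(diff.sum(axis=0)) / (np.linalg.norm(p) * len(diff))

def select_planar_points(scan, curvature_thresh=0.05, k=5):
    """Keep only low-curvature (planar) points as tracking candidates."""
    return [i for i in range(k, len(scan) - k)
            if local_curvature(scan, i, k) < curvature_thresh]

def fit_plane(tracked_points):
    """Initialize a plane (unit normal n, offset d) from points tracked over the
    sliding window, assumed to be expressed in a common (undistorted) frame.
    The normal is the direction of least variance from an SVD of the centered points."""
    pts = np.asarray(tracked_points)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    n = vt[-1]               # plane normal
    d = -n.dot(centroid)     # plane equation: n . x + d = 0
    return n, d
```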
Furthermore, the paper addresses the critical issue of online spatiotemporal calibration between the sensors. An observability analysis of the LiDAR-IMU subsystem identifies degenerate cases in which additional states become unobservable. These cases are validated through Monte-Carlo simulations, and the findings are corroborated by real-world experiments.
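The paper's analysis is analytic, but a simple intuition for one degenerate case is that the tracked planes fail to span three independent directions (for example, only a ground plane is visible), leaving some translation and extrinsic-calibration directions weakly constrained. The heuristic check below is a hypothetical illustration of that intuition, not the method used in the paper; the threshold and function name are assumptions.

```python
import numpy as np

def plane_direction_degeneracy(normals, eig_thresh=1e-3):
    """Stack the unit normals of the currently tracked planes and inspect the
    eigenvalues of N = sum(n n^T). Eigenvalues near zero mean the planes span
    fewer than three independent directions, so state components along the
    missing directions are poorly observable."""
    N = sum(np.outer(n, n) for n in normals)
    eigvals = np.linalg.eigvalsh(N)
    deficient = int(np.sum(eigvals < eig_thresh * eigvals.max()))
    return deficient  # number of weakly constrained directions
```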
The experimental evaluation, conducted on both simulated environments and real-world datasets, shows that LIC-Fusion 2.0 outperforms the original LIC-Fusion and other state-of-the-art methods. The system's ability to maintain estimation consistency and accuracy is particularly noteworthy in complex environments where challenging lighting or structural conditions can confound typical odometry solutions.
From a theoretical perspective, this work contributes substantially to the understanding of observability in multi-sensor fusion systems. These findings suggest that future developments could extend the framework to additional sensor modalities or to more varied features and environments. The method's computational efficiency and robustness also make it well suited to real-time applications.
Looking forward, incorporating sliding-window edge-feature tracking for LiDAR could further augment the system, enabling more comprehensive multi-modal fusion strategies. Such advancements would strengthen its applicability to autonomous navigation, including autonomous driving and advanced robotics.