Tightly-Coupled LiDAR-Visual-Inertial SLAM and Large-Scale Volumetric Occupancy Mapping
Introduction
In autonomous navigation, precise localisation is essential, but so is an accurate representation of the 3D environment. Traditional SLAM (Simultaneous Localisation and Mapping) systems that fuse different sensory inputs (such as stereo vision, Inertial Measurement Units (IMUs), and Light Detection and Ranging (LiDAR) sensors) have shown promise in achieving accurate localisation. However, most current systems represent the 3D world in formats not immediately suitable for navigation and exploration tasks, which require explicit knowledge of free space. This paper presents a novel approach that integrates LiDAR, visual, and inertial data in a tightly-coupled SLAM system. The system produces globally consistent volumetric occupancy maps, improving both localisation accuracy and the practical utility of the generated maps for robotic navigation.
System Overview
The core innovation lies in the fusion of LiDAR, visual, and inertial measurements in a tightly-coupled SLAM system that also incorporates volumetric occupancy mapping. The system leverages LiDAR data not only to improve localisation accuracy but also to update occupancy maps of the environment in real time. A key contribution is a set of novel LiDAR residuals based on occupancy fields and their gradients, which allow LiDAR data to be incorporated efficiently into the factor-graph optimization without expensive data-association steps.
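To make this residual concrete, below is a minimal sketch of how an occupancy-field LiDAR residual and its gradient might be evaluated. It assumes the map is stored as a dense log-odds voxel grid with values at voxel centres and uses trilinear interpolation; the function name, the dense-array storage, and the voxel-centre convention are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def occupancy_residual(grid, voxel_size, T_MS, p_S):
    """Evaluate a LiDAR endpoint p_S (sensor frame) in a log-odds
    occupancy grid. The interpolated field value is the residual (the
    surface lies at the zero crossing of the field), and the spatial
    gradient feeds the pose Jacobian via the chain rule. Illustrative
    sketch only; boundary checks are omitted."""
    p_M = (T_MS @ np.append(p_S, 1.0))[:3]       # endpoint in map frame
    q = p_M / voxel_size - 0.5                   # voxel-centre coordinates
    i0 = np.floor(q).astype(int)                 # lower-corner voxel index
    wx, wy, wz = q - i0                          # trilinear weights in [0,1)

    # 2x2x2 neighbourhood of log-odds values around the endpoint.
    c = grid[i0[0]:i0[0]+2, i0[1]:i0[1]+2, i0[2]:i0[2]+2].astype(float)

    # Trilinear interpolation: collapse x, then y, then z.
    cx = c[0] * (1 - wx) + c[1] * wx             # shape (2, 2) over (y, z)
    cy = cx[0] * (1 - wy) + cx[1] * wy           # shape (2,) over z
    value = cy[0] * (1 - wz) + cy[1] * wz        # scalar residual

    def bilerp(m, u, v):                         # bilinear helper on a 2x2 block
        return ((m[0, 0] * (1 - u) + m[1, 0] * u) * (1 - v)
                + (m[0, 1] * (1 - u) + m[1, 1] * u) * v)

    # Exact gradient of the trilinear interpolant: corner differences
    # along one axis, bilinearly interpolated over the other two.
    grad = np.array([
        bilerp(c[1] - c[0], wy, wz),             # d(value)/dx
        bilerp(c[:, 1] - c[:, 0], wx, wz),       # d(value)/dy
        bilerp(c[:, :, 1] - c[:, :, 0], wx, wy)  # d(value)/dz
    ]) / voxel_size
    return value, grad
```

Because the residual is a direct field lookup at the transformed endpoint, each LiDAR point contributes a factor without any nearest-neighbour search over previous scans, which is what removes the expensive data-association step.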
Mapping Approach
The mapping module employs a submapping strategy to remain scalable in large-scale environments, dividing the map into local submaps that are individually consistent. These submaps are then globally aligned and integrated into the SLAM system through novel frame-to-map and map-to-map optimization factors. This strategy not only helps maintain the global consistency of the map but also improves the robustness and accuracy of the SLAM system by leveraging volumetric information in the optimization; a sketch of the idea follows below.
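As a rough illustration of this submapping strategy, the sketch below keeps a list of locally consistent submaps, each anchored by a pose that remains a variable in the factor graph, and shows how a frame-to-map factor could tie a body pose to a submap through its occupancy field. The Submap and SubmapManager structures, the 10 m spawning threshold, and the occupancy_lookup callable are all assumptions for illustration, not the authors' API.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Submap:
    """A locally consistent occupancy submap. Its anchor pose T_WM
    (world <- map) stays a variable in the factor graph, so global
    optimization can re-align whole submaps without re-integrating
    their LiDAR scans."""
    T_WM: np.ndarray      # 4x4 anchor pose
    grid: np.ndarray      # log-odds voxel grid (dense here for simplicity)
    voxel_size: float

class SubmapManager:
    """Spawns a new submap once the robot moves far enough from the
    current anchor; the distance threshold is an illustrative parameter."""
    def __init__(self, new_submap_dist: float = 10.0):
        self.new_submap_dist = new_submap_dist
        self.submaps: list[Submap] = []

    def active_submap(self, T_WB: np.ndarray) -> Submap:
        far = (not self.submaps or
               np.linalg.norm(T_WB[:3, 3] - self.submaps[-1].T_WM[:3, 3])
               > self.new_submap_dist)
        if far:
            grid = np.zeros((256, 256, 128))    # fresh local volume
            self.submaps.append(Submap(T_WB.copy(), grid, 0.05))
        return self.submaps[-1]

def frame_to_map_residuals(submap, T_WB, scan_B, occupancy_lookup):
    """Frame-to-map factor sketch: express scan endpoints (body frame B)
    in submap coordinates and penalise their interpolated occupancy,
    coupling the body pose T_WB and the anchor T_WM in a single factor.
    `occupancy_lookup` interpolates the field at a map-frame point,
    analogous to the trilinear sketch above."""
    T_MB = np.linalg.inv(submap.T_WM) @ T_WB
    pts_M = scan_B @ T_MB[:3, :3].T + T_MB[:3, 3]   # (N, 3) in map frame
    return np.array([occupancy_lookup(submap.grid, p, submap.voxel_size)
                     for p in pts_M])
```

A map-to-map factor can be read the same way: evaluate points drawn from one submap's surface in the occupancy field of an overlapping neighbour, so that loop closures pull whole submaps into agreement. This is one plausible reading of the factors described above rather than the authors' exact formulation.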
Experimental Results
The system was comprehensively evaluated on the HILTI 2022 SLAM Challenge, showing competitive localisation accuracy against state-of-the-art methods. In addition, a qualitative evaluation of the occupancy maps demonstrates their consistency and utility for navigation tasks. The system runs efficiently in real time, with further enhancements achievable through parameter adjustments tailored to the processing capabilities of the deployment platform.
Conclusion and Future Work
This work introduces a state-of-the-art approach for tightly-coupled LiDAR-Visual-Inertial SLAM, capable of producing accurate, globally consistent volumetric maps. Future developments will focus on refining the uncertainty model for LiDAR measurements, enhancing robustness to difficult scenarios where visual tracking may fail, and expanding the framework to support autonomous exploration and navigation through dynamically generated submaps. This research represents a significant step forward in realizing fully autonomous robotic systems capable of navigating and understanding complex 3D environments in real-time.
Implications
The presented system has broad implications for the development of autonomous robotic navigation. By providing highly accurate localisation and a detailed, navigable map of the environment, robots can operate more effectively in complex, unstructured settings. This capability is crucial for a wide range of applications, including search and rescue operations in disaster-stricken areas, autonomous exploration in unknown territories, and sophisticated navigation tasks in industrial automation.