- The paper introduces a novel LiDAR-only framework that uses BEV images and advanced feature extraction (R2D2/SIFT) for robust outdoor localization.
- It demonstrates outstanding performance with sub-meter Average Trajectory Error in both single- and multi-session experiments across diverse sensor models.
- The system achieves efficient map storage (around 4 MB/km²) and maintains high accuracy in environments where GNSS and visual methods fail.
GroundLoc: Efficient Large-Scale Outdoor LiDAR-Only Localization
Introduction
GroundLoc is a LiDAR-only localization framework designed for large-scale outdoor environments. It targets the settings that challenge mobile robots most: expansive areas where GNSS is unreliable and visual localization struggles because vertical features are scarce.
Figure 1: Overview of the proposed system. Input point cloud in red, intermediate results in blue, and output pose estimate in green.
Central to GroundLoc is the use of Bird's-Eye View (BEV) images, which project the LiDAR readings into a top-down raster that emphasizes the spatial structure of the scene. Keypoints are extracted from the BEV images with either R2D2, a learned detector-descriptor, or classical SIFT, yielding features that remain identifiable even in dynamic environments. The resulting raster maps are stored as compact 2D images and require minimal space for significant areas.
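The BEV projection step can be sketched as follows. This is a minimal illustration, not the paper's implementation: the point-cloud layout (an N×4 array of x, y, z, intensity), the 0.2 m resolution, and the 50 m extent are all assumed values.

```python
import numpy as np

def points_to_bev(points, resolution=0.2, extent=50.0):
    """Rasterize an (N, 4) array of (x, y, z, intensity) points into a
    single-channel BEV image holding the maximum intensity per cell."""
    size = int(2 * extent / resolution)
    bev = np.zeros((size, size), dtype=np.float32)
    # Keep only points inside the square region around the sensor.
    mask = (np.abs(points[:, 0]) < extent) & (np.abs(points[:, 1]) < extent)
    pts = points[mask]
    # Map metric coordinates to pixel indices.
    ix = ((pts[:, 0] + extent) / resolution).astype(int)
    iy = ((pts[:, 1] + extent) / resolution).astype(int)
    # Max-pool intensities into their cells (handles duplicate indices).
    np.maximum.at(bev, (iy, ix), pts[:, 3])
    return bev
```

`np.maximum.at` performs an unbuffered reduction, so several points falling into the same cell correctly keep the maximum rather than the last write.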
Methodology
BEV Image and Map Creation
The BEV images build on GroundGrid, which separates ground from non-ground points to suppress noise from transient objects. Each image combines three channels, intensity, slope, and variance, which together capture terrain nuances and static structure. Maps are created from a single traversal, aligned with ground-truth poses, and stored efficiently at roughly 4 MB/km².
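A per-cell computation of the three channels might look like the sketch below. The channel definitions here are assumptions for illustration (mean intensity, slope approximated as the z-range within a cell, and the variance of z); the paper's exact formulations may differ.

```python
import numpy as np

def bev_channels(ground_pts, resolution=0.2, extent=50.0):
    """Build three BEV channels -- intensity, slope, variance -- from
    ground-segmented points given as an (N, 4) array (x, y, z, intensity)."""
    size = int(2 * extent / resolution)
    mask = (np.abs(ground_pts[:, 0]) < extent) & (np.abs(ground_pts[:, 1]) < extent)
    pts = ground_pts[mask]
    ix = ((pts[:, 0] + extent) / resolution).astype(int)
    iy = ((pts[:, 1] + extent) / resolution).astype(int)
    flat = iy * size + ix  # flattened cell index per point

    # Per-cell accumulators via bincount.
    count = np.bincount(flat, minlength=size * size)
    sum_i = np.bincount(flat, weights=pts[:, 3], minlength=size * size)
    sum_z = np.bincount(flat, weights=pts[:, 2], minlength=size * size)
    sum_z2 = np.bincount(flat, weights=pts[:, 2] ** 2, minlength=size * size)
    zmax = np.full(size * size, -np.inf)
    zmin = np.full(size * size, np.inf)
    np.maximum.at(zmax, flat, pts[:, 2])
    np.minimum.at(zmin, flat, pts[:, 2])

    occ = count > 0
    denom = np.maximum(count, 1)
    intensity = np.where(occ, sum_i / denom, 0.0)
    mean_z = sum_z / denom
    # Clip tiny negative values caused by floating-point cancellation.
    variance = np.where(occ, np.maximum(sum_z2 / denom - mean_z ** 2, 0.0), 0.0)
    slope = np.where(occ, zmax - zmin, 0.0)
    return np.stack([c.reshape(size, size) for c in (intensity, slope, variance)])
```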
Localization Pipeline
The GroundLoc pipeline rasterizes incoming LiDAR scans into BEV images and extracts features with either R2D2, a CNN-based extractor, or SIFT. Candidate correspondences are found with a KD-Tree over the descriptors, refined by a robust consensus step such as Quatro, and finally used to compute the pose correction.
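The matching-and-alignment step can be sketched as below. This is a simplified stand-in: the KD-tree descriptor query mirrors the initial pairing, but the robust consensus step (Quatro in the paper) is replaced here by a plain least-squares Kabsch/SVD alignment, and the ratio-test threshold is an assumed value.

```python
import numpy as np
from scipy.spatial import cKDTree

def match_and_align(query_kp, query_desc, map_kp, map_desc, ratio=0.8):
    """Match keypoint descriptors with a KD-tree, then estimate the 2D rigid
    transform (R, t) mapping query keypoints onto the map via SVD.
    A least-squares stand-in for the robust consensus step (e.g. Quatro)."""
    tree = cKDTree(map_desc)
    dist, idx = tree.query(query_desc, k=2)
    # Lowe-style ratio test to discard ambiguous matches.
    good = dist[:, 0] < ratio * dist[:, 1]
    src = query_kp[good]
    dst = map_kp[idx[good, 0]]
    # Kabsch algorithm: align the centered point sets.
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(U @ Vt))  # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```

In practice the consensus stage matters: with outlier correspondences the plain SVD fit degrades, which is precisely what robust estimators like Quatro are designed to handle.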
Figure 2: Visualization of multi-session localization results of our method on the HeLiPR Roundabout Ouster sequence. The coloring of the trajectory indicates the translational deviation from the ground truth.
Experiments and Results
Single-Session and Multi-Session Evaluation
GroundLoc is validated on the SemanticKITTI and HeLiPR datasets, demonstrating resilience across diverse environments and a variety of sensor models. In single-session scenarios it achieves an Average Trajectory Error (ATE) under 0.15 m, outperforming ICP and Fingerprint localization baselines.
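The reported metric can be computed straightforwardly. A minimal sketch, assuming the estimated and ground-truth trajectories are already expressed in the same frame and sampled at matching timestamps:

```python
import numpy as np

def average_trajectory_error(est, gt):
    """Average Trajectory Error: mean Euclidean distance between estimated
    and ground-truth positions, given as (N, 2) or (N, 3) arrays."""
    return float(np.mean(np.linalg.norm(est - gt, axis=1)))
```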
GroundLoc also excels in multi-session testing, maintaining sub-meter ATE across challenging sequences with Ouster, Aeva, and Livox sensors despite varied data distributions and environmental conditions. The advantage of R2D2-based feature extraction over SIFT is most pronounced for sensors with sparse data or unconventional scanning patterns.
Analysis and Discussion
The study highlights GroundLoc's robustness against the typical challenges of large-scale localization. By combining a compact data representation with refined feature extraction, the system maintains both accuracy and the speed required for real-time operation. The cross-sensor adaptability observed in the experiments suggests the framework generalizes beyond any single LiDAR model.
Future extensions may explore seamless integration with additional sensory inputs like inertial measurements to further enhance localization precision in complex terrains. The open-source release encourages community-led advancements in solving persistent localization challenges.
Conclusion
GroundLoc addresses large-scale LiDAR-only localization with high precision and speed. Its efficient map storage and repeatable, reliable feature extraction make it a strong foundation for future research and applications in autonomous systems. The source code is made publicly available for further exploration and community use.