
GroundLoc: Efficient Large-Scale Outdoor LiDAR-Only Localization

Published 28 Oct 2025 in cs.RO and cs.CV | (2510.24623v1)

Abstract: In this letter, we introduce GroundLoc, a LiDAR-only localization pipeline designed to localize a mobile robot in large-scale outdoor environments using prior maps. GroundLoc employs a Bird's-Eye View (BEV) image projection focusing on the perceived ground area and utilizes the place recognition network R2D2, or alternatively, the non-learning approach Scale-Invariant Feature Transform (SIFT), to identify and select keypoints for BEV image map registration. Our results demonstrate that GroundLoc outperforms state-of-the-art methods on the SemanticKITTI and HeLiPR datasets across various sensors. In the multi-session localization evaluation, GroundLoc reaches an Average Trajectory Error (ATE) well below 50 cm on all Ouster OS2 128 sequences while meeting online runtime requirements. The system supports various sensor models, as evidenced by evaluations conducted with Velodyne HDL-64E, Ouster OS2 128, Aeva Aeries II, and Livox Avia sensors. The prior maps are stored as 2D raster image maps, which can be created from a single drive and require only 4 MB of storage per square kilometer. The source code is available at https://github.com/dcmlr/groundloc.

Summary

  • The paper introduces a LiDAR-only framework that uses BEV images with learned (R2D2) or classical (SIFT) keypoint extraction for robust outdoor localization.
  • It demonstrates strong performance, with Average Trajectory Errors under 0.15 m in single-session and well below 50 cm in multi-session experiments across diverse sensor models.
  • The system achieves efficient map storage (around 4 MB/km²) and maintains high accuracy in environments where GNSS and visual methods fail.


Introduction

GroundLoc is a localization framework designed for large-scale outdoor environments using LiDAR data only. The approach addresses challenges commonly faced by mobile robots in expansive settings, where GNSS degrades or fails and visual localization struggles due to scarce vertical features (Figure 1).

Figure 1: Overview of the proposed system. Input point cloud in red, intermediate results in blue, and output pose estimate in green.

Central to GroundLoc is the use of Bird's-Eye View (BEV) images, which capture the spatial structure of the perceived ground area. The BEV images are processed with keypoint extraction methods, either R2D2 or traditional SIFT, to identify repeatable features even in dynamic environments. The prior maps are stored as compact 2D raster images, requiring only about 4 MB per square kilometer.

Methodology

BEV Image and Map Creation

The BEV images are generated with the help of GroundGrid, which separates ground from non-ground points to suppress noise from transient objects. Each image combines three channels, intensity, slope, and height variance, that capture terrain texture and static structure. The maps, built from a single traversal and registered with ground-truth poses, are stored efficiently at about 4 MB/km².
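To make the representation concrete, below is a minimal, illustrative sketch of a three-channel BEV rasterization, assuming the ground points have already been separated (e.g., by GroundGrid). The cell size, image extent, and channel encodings are assumptions for illustration, not the paper's exact parameters.

```python
import numpy as np

def rasterize_bev(points, intensities, cell=0.25, extent=50.0):
    """points: (N, 3) ground points in the sensor frame; intensities: (N,)."""
    size = int(2 * extent / cell)
    # Map x/y coordinates to pixel indices on a grid centered on the sensor.
    ij = ((points[:, :2] + extent) / cell).astype(int)
    valid = (ij >= 0).all(axis=1) & (ij < size).all(axis=1)
    ij, z, inten = ij[valid], points[valid, 2], intensities[valid]
    flat = ij[:, 0] * size + ij[:, 1]

    # Per-cell means and height variance via weighted bincounts.
    cnt = np.bincount(flat, minlength=size * size).astype(float)
    occ = cnt > 0
    mean_i = np.zeros(size * size)
    mean_z = np.zeros(size * size)
    var_z = np.zeros(size * size)
    mean_i[occ] = np.bincount(flat, inten, size * size)[occ] / cnt[occ]
    mean_z[occ] = np.bincount(flat, z, size * size)[occ] / cnt[occ]
    var_z[occ] = np.bincount(flat, z * z, size * size)[occ] / cnt[occ] - mean_z[occ] ** 2
    var_z = np.maximum(var_z, 0.0)  # guard against floating-point round-off

    # Slope from finite differences of the per-cell mean height.
    h = mean_z.reshape(size, size)
    gy, gx = np.gradient(h)
    slope = np.hypot(gx, gy) / cell  # rise over run, in m/m

    bev = np.stack([mean_i.reshape(size, size), slope,
                    var_z.reshape(size, size)], axis=-1)
    return bev, occ.reshape(size, size)
```

At a 0.25 m cell size, a 1 km² tile is a 4000x4000 image; stored as compressed 8-bit raster tiles, such images can plausibly approach the reported ~4 MB/km² footprint.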

Localization Pipeline

The GroundLoc pipeline projects incoming LiDAR scans into BEV images and then extracts keypoints with R2D2, a CNN-based detector-descriptor, or with SIFT. Candidate matches against the map are found with a KD-Tree search and refined with the consensus-based estimator Quatro, yielding the final pose correction (Figure 2).

Figure 2: Multi-session localization results of our method on the HeLiPR Roundabout Ouster sequence. The trajectory coloring indicates the translational deviation from the ground truth.
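As a rough illustration of the registration step, the sketch below matches SIFT keypoints between a live BEV image and a map tile and recovers a 2D rigid transform. Note the substitution: the paper refines correspondences with Quatro, whereas this sketch uses OpenCV's RANSAC-based estimator as a simpler stand-in; the FLANN matcher here is backed by randomized KD-trees.

```python
import cv2
import numpy as np

def register_bev(live_img, map_img):
    """live_img, map_img: single-channel uint8 BEV images (illustrative)."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(live_img, None)
    kp2, des2 = sift.detectAndCompute(map_img, None)
    if des1 is None or des2 is None:
        return None  # no keypoints found in one of the images

    # FLANN matcher backed by randomized KD-trees; Lowe's ratio test
    # rejects ambiguous correspondences.
    flann = cv2.FlannBasedMatcher({"algorithm": 1, "trees": 4}, {"checks": 32})
    good = []
    for pair in flann.knnMatch(des1, des2, k=2):
        if len(pair) == 2 and pair[0].distance < 0.8 * pair[1].distance:
            good.append(pair[0])
    if len(good) < 3:
        return None  # too few correspondences for a reliable estimate

    src = np.float32([kp1[m.queryIdx].pt for m in good])
    dst = np.float32([kp2[m.trainIdx].pt for m in good])
    # Partial affine = rotation + translation (+ uniform scale), estimated
    # with RANSAC; the paper uses Quatro for this consensus step instead.
    T, inliers = cv2.estimateAffinePartial2D(
        src, dst, method=cv2.RANSAC, ransacReprojThreshold=2.0)
    return T  # 2x3 matrix mapping live BEV pixels into the map tile
```

The returned transform maps live-image pixels into the map tile; scaling the translation by the cell size converts it into meters for the pose correction.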

Experiments and Results

Single-Session and Multi-Session Evaluation

GroundLoc is validated on the SemanticKITTI and HeLiPR datasets, showing resilience across diverse environments and a variety of sensor models. In single-session scenarios, it achieves an Average Trajectory Error (ATE) under 0.15 m, outperforming ICP- and fingerprint-based localization baselines.
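For reference, ATE is typically computed by rigidly aligning the estimated trajectory to the ground truth and taking the RMSE of the translational residuals. The sketch below is a generic implementation (Kabsch/Umeyama alignment without scale), not code from the paper.

```python
import numpy as np

def ate_rmse(est, gt):
    """est, gt: (N, 2) or (N, 3) arrays of associated positions."""
    mu_e, mu_g = est.mean(axis=0), gt.mean(axis=0)
    E, G = est - mu_e, gt - mu_g
    # Kabsch: best-fit rotation from the cross-covariance SVD,
    # with a sign correction to avoid reflections.
    U, _, Vt = np.linalg.svd(E.T @ G)
    S = np.eye(est.shape[1])
    S[-1, -1] = np.sign(np.linalg.det(U @ Vt))
    R = (U @ S @ Vt).T  # rotation taking est into the gt frame
    t = mu_g - R @ mu_e
    residuals = est @ R.T + t - gt
    return np.sqrt((residuals ** 2).sum(axis=1).mean())
```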

In multi-session testing, GroundLoc maintains sub-meter ATE across challenging sequences with Ouster, Aeva, and Livox sensors, despite differing point distributions and environmental conditions. R2D2-based feature extraction proves notably more effective than SIFT for sensors with sparse returns or unconventional scanning patterns.

Analysis and Discussion

The study highlights GroundLoc's robustness to the typical challenges of large-scale localization. By prioritizing an efficient map representation and repeatable feature extraction, the system maintains accuracy at online speeds, which is crucial for real-time operation. The cross-sensor adaptability demonstrated in the experiments points toward broader applicability of the framework.

Future extensions may explore seamless integration with additional sensory inputs like inertial measurements to further enhance localization precision in complex terrains. The open-source release encourages community-led advancements in solving persistent localization challenges.

Conclusion

GroundLoc addresses large-scale LiDAR-only localization with high precision at online speeds. Its compact map storage and repeatable, reliable feature extraction make it a useful basis for future research and applications in autonomous systems. The source code is publicly available for further exploration and community use.
