
Lidar with Velocity: Correcting Moving Objects Point Cloud Distortion from Oscillating Scanning Lidars by Fusion with Camera (2111.09497v3)

Published 18 Nov 2021 in cs.RO and cs.CV

Abstract: Lidar point cloud distortion from moving objects is an important problem in autonomous driving, and it has recently become even more demanding with the emergence of newer lidars that feature back-and-forth scanning patterns. Accurately estimating a moving object's velocity would not only provide a tracking capability but also correct the point cloud distortion with a more accurate description of the moving object. Since lidar measures time-of-flight distance but with sparse angular resolution, the measurement is precise radially but lacks angular precision. The camera, on the other hand, provides dense angular resolution. In this paper, Gaussian-based lidar and camera fusion is proposed to estimate the full velocity and correct the lidar distortion. A probabilistic Kalman-filter framework is provided to track moving objects, estimate their velocities, and simultaneously correct the point cloud distortions. The framework is evaluated on real road data, and the fusion method outperforms traditional ICP-based and point-cloud-only methods. The complete working framework is open-sourced (https://github.com/ISEE-Technology/lidar-with-velocity) to accelerate the adoption of the emerging lidars.

Citations (18)

Summary

  • The paper introduces a novel sensor fusion algorithm that combines lidar and camera data to correct distortions from moving objects.
  • It employs a Kalman filter to estimate full 3D velocity vectors, improving real-time tracking of moving objects in autonomous driving.
  • The approach is validated on real-world road data, outperforming conventional methods and provided as an open-source framework.

Lidar with Velocity: Correcting Moving Objects Point Cloud Distortion from Oscillating Scanning Lidars by Fusion with Camera

The paper introduces a novel approach to address the distortion of point clouds caused by moving objects in oscillating lidar systems. This issue has gained prominence with the adoption of oscillating lidars, whose back-and-forth scanning patterns introduce distinct distortion characteristics for moving objects. The authors propose a fusion framework utilizing both lidar and camera data to effectively correct these distortions, yielding enhanced velocity estimation and object tracking capabilities.

The core of the proposed framework is its ability to leverage the complementary strengths of lidar and camera systems. Lidar, with its precise radial distance measurements, suffers from sparse angular resolution, while cameras provide dense angular information without direct distance measurements. The authors introduce a probabilistic sensor fusion methodology based on a Kalman filter, which integrates the velocity data from both sensors. This approach facilitates accurate real-time tracking and prediction of moving objects, crucial for reliable autonomous navigation.
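
To make the complementary-covariance idea concrete, here is a minimal Python sketch of fusing two Gaussian velocity estimates, one radially tight from lidar and one tangentially tight from the camera, via the standard information-form product of Gaussians. The function name, axes, and covariance values are illustrative assumptions, not the paper's actual parameters or implementation.

```python
# Minimal sketch: Gaussian fusion of two anisotropic velocity estimates.
# Covariance values below are illustrative, not the authors' parameters.
import numpy as np

def fuse_gaussian_velocities(v_lidar, P_lidar, v_cam, P_cam):
    """Fuse two Gaussian velocity estimates (mean, covariance) using the
    information-form product of Gaussians."""
    info_lidar = np.linalg.inv(P_lidar)
    info_cam = np.linalg.inv(P_cam)
    P_fused = np.linalg.inv(info_lidar + info_cam)
    v_fused = P_fused @ (info_lidar @ v_lidar + info_cam @ v_cam)
    return v_fused, P_fused

# Example: object along the x-axis, so x is radial and y/z are tangential.
# Lidar: precise radially (small x variance), poor tangentially.
P_lidar = np.diag([0.01, 4.0, 4.0])
v_lidar = np.array([5.1, 0.0, 0.0])
# Camera: dense angular resolution -> tight tangentially, weak radially.
P_cam = np.diag([9.0, 0.05, 0.05])
v_cam = np.array([3.0, 1.2, 0.0])

v_fused, P_fused = fuse_gaussian_velocities(v_lidar, P_lidar, v_cam, P_cam)
print(v_fused)  # radial component dominated by lidar, tangential by camera
```

The fused estimate inherits the low radial variance from the lidar term and the low tangential variance from the camera term, which is why the combined velocity can outperform either sensor alone.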

Key Contributions:

  1. Innovative Distortion Correction: This is the first attempt to specifically address the distortion challenges posed by the emerging oscillating-type lidars. The fusion algorithm capitalizes on the camera's enhanced angular resolution to correct lidar point cloud distortions accurately (a minimal sketch of this per-point correction follows this list).
  2. Full 3D Velocity Estimation: The framework estimates the complete 3D velocity vector and integrates it within a Kalman-filter tracker, improving the accuracy of moving-object prediction and tracking.
  3. Open-source Framework: The authors offer a comprehensive, real-time capable pipeline from sensor detection to backend tracking. The framework is openly available, promoting wider research adoption and industrial use.
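
Once the object's velocity is known, the correction itself reduces to a simple kinematic shift: each lidar point of a tracked object is moved along the estimated velocity by the time gap between its capture instant and a common reference time. Below is a minimal Python sketch of this idea; the function name and array shapes are illustrative assumptions, not the open-source framework's actual API.

```python
# Hedged sketch of velocity-based de-skewing: shift each point by the
# object's estimated velocity times the gap between its capture time and
# a common reference time. The released framework may differ in detail.
import numpy as np

def deskew_points(points, timestamps, velocity, t_ref):
    """points: (N, 3) positions; timestamps: (N,) per-point capture times;
    velocity: (3,) estimated object velocity; t_ref: reference time."""
    dt = (t_ref - timestamps)[:, None]      # (N, 1) time offsets
    return points + velocity[None, :] * dt  # shift each point along v

# Toy example: object moving at 5 m/s in x, scanned over 100 ms.
pts = np.random.rand(100, 3)
ts = np.linspace(0.0, 0.1, 100)
corrected = deskew_points(pts, ts, np.array([5.0, 0.0, 0.0]), t_ref=0.1)
```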

The efficacy of the proposed system is underscored through evaluation on real-world road data, demonstrating superior performance compared to existing methods. By quantitatively measuring the crispness of corrected point cloud data, the paper highlights substantial improvements in distortion correction, particularly in tangential movement scenarios (a simple proxy for such a crispness measure is sketched below). The integration of high-resolution camera data proves beneficial in scenarios where traditional lidar systems struggle, such as tangential and turning movements.
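
As one way to make "crispness" concrete, the sketch below uses a simple proxy, the mean nearest-neighbor distance within a cloud, under the assumption that a well-corrected object cloud is more compact and less smeared. The paper's actual crispness metric may be defined differently.

```python
# Simple crispness proxy: mean distance from each point to its nearest
# neighbor; lower values indicate a crisper, less smeared point cloud.
# This is an assumed stand-in, not necessarily the paper's metric.
import numpy as np
from scipy.spatial import cKDTree

def mean_nn_distance(points):
    """Mean nearest-neighbor distance over an (N, 3) point cloud."""
    tree = cKDTree(points)
    dists, _ = tree.query(points, k=2)  # k=2: first hit is the point itself
    return dists[:, 1].mean()

# Usage: compare the raw and velocity-corrected clouds of one object.
# crisp_raw = mean_nn_distance(raw_points)
# crisp_fixed = mean_nn_distance(corrected_points)  # expect a lower value
```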

The practical implications of this research are significant. With the automotive industry's increasing reliance on sophisticated sensor suites for autonomous vehicles, the integration and enhancement of sensor capabilities, as demonstrated, are critical. As oscillating lidar systems are gradually adopted for their performance and cost benefits, ensuring accurate perception under these new modalities is imperative.

Future research can explore expanding the framework's application to various lidar configurations and diverse environmental conditions. Additionally, integrating dynamic object shape changes and leveraging machine learning for further enhancement of velocity predictions represent promising avenues. Collectively, this research moves towards the development of robust perception systems capable of navigating complex real-world environments with increased accuracy and reliability.
