- The paper introduces a novel sensor fusion algorithm that combines lidar and camera data to correct point cloud distortion caused by moving objects.
- It employs a Kalman filter to estimate full 3D velocity vectors, improving real-time tracking of moving objects in autonomous systems.
- The approach is validated on real-world road data, outperforms conventional methods, and is released as an open-source framework.
Lidar with Velocity: Correcting Moving Objects Point Cloud Distortion from Oscillating Scanning Lidars by Fusion with Camera
The paper introduces a novel approach to correcting the point cloud distortion that moving objects cause in oscillating-scanning lidar systems. Because such a lidar sweeps across an object over a finite time window, the points of a moving object are captured at different instants and appear smeared along its trajectory. The issue has gained prominence with the adoption of oscillating lidar technologies, which bring their own challenges and opportunities in the autonomous vehicle domain. The authors propose a fusion framework that uses lidar and camera data together to correct these distortions, yielding improved velocity estimation and object tracking.
The core of the proposed framework is its ability to leverage the complementary strengths of lidar and camera systems. Lidar, with its precise radial distance measurements, suffers from sparse angular resolution, while cameras provide dense angular information without direct distance measurements. The authors introduce a probabilistic sensor fusion methodology based on a Kalman filter, which integrates the velocity data from both sensors. This approach facilitates accurate real-time tracking and prediction of moving objects, crucial for reliable autonomous navigation.
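To make the fusion idea concrete, the sketch below shows a minimal constant-velocity Kalman filter over an object's centroid state, with one update for a lidar-derived position and one for a camera-derived tangential velocity. This is an illustration of the general mechanism only; the class name, measurement models, and noise values are assumptions, not the paper's exact formulation.

```python
import numpy as np

class ConstantVelocityKF:
    """Minimal constant-velocity Kalman filter over x = [px, py, pz, vx, vy, vz]."""

    def __init__(self, dt=0.1):
        self.x = np.zeros(6)                 # state estimate
        self.P = np.eye(6)                   # state covariance
        self.F = np.eye(6)                   # constant-velocity transition
        self.F[:3, 3:] = np.eye(3) * dt
        self.Q = np.eye(6) * 1e-2            # process noise (assumed value)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q

    def _update(self, z, H, R):
        # Standard linear Kalman update for measurement z = H x + noise.
        y = z - H @ self.x                   # innovation
        S = H @ self.P @ H.T + R             # innovation covariance
        K = self.P @ H.T @ np.linalg.inv(S)  # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(6) - K @ H) @ self.P

    def update_lidar_position(self, centroid_xyz):
        # Lidar: precise range measurements give an accurate 3D centroid.
        H = np.hstack([np.eye(3), np.zeros((3, 3))])
        self._update(np.asarray(centroid_xyz), H, R=np.eye(3) * 0.05)

    def update_camera_velocity(self, v_tangential):
        # Camera: dense angular motion, assumed already converted to a metric
        # tangential velocity using the tracked object's depth.
        H = np.hstack([np.zeros((3, 3)), np.eye(3)])
        self._update(np.asarray(v_tangential), H, R=np.eye(3) * 0.5)
```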
Key Contributions:
- Innovative Distortion Correction: This is the first attempt to specifically address the distortion challenges posed by emerging oscillating-type lidars. The fusion algorithm capitalizes on the camera's higher angular resolution to correct lidar point cloud distortions accurately (see the correction sketch after this list).
- Full 3D Velocity Estimation: The framework estimates the complete 3D velocity vector of each tracked object by combining lidar and camera observations within a Kalman filter, which improves the accuracy of moving-object prediction and tracking.
- Open-source Framework: The authors offer a complete, real-time capable pipeline from sensor-level detection to backend tracking. The framework is openly available, promoting wider research use and industry adoption.
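As referenced in the distortion-correction item above, the correction step is conceptually simple once a per-object velocity is available: each lidar point is shifted by the motion the object underwent between that point's capture time and a common reference time. The sketch below assumes a constant velocity over the sweep; the function name and interface are illustrative, not taken from the released framework.

```python
import numpy as np

def correct_object_distortion(points, timestamps, velocity, t_ref):
    """Shift each point of one tracked object to its estimated position at t_ref.

    points:     (N, 3) xyz points belonging to the object
    timestamps: (N,)   per-point capture times within the lidar sweep
    velocity:   (3,)   estimated 3D velocity of the object (assumed constant)
    t_ref:      float  reference time the corrected cloud should represent
    """
    dt = (t_ref - np.asarray(timestamps))[:, None]        # time gap per point
    return np.asarray(points) + dt * np.asarray(velocity)[None, :]
```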
The efficacy of the proposed system is demonstrated through evaluation on real-world road data, where it outperforms existing methods. By quantitatively measuring the crispness of corrected point clouds, the paper shows substantial improvements in distortion correction, particularly in tangential movement scenarios. The high-resolution camera data proves especially beneficial where traditional lidar systems struggle, such as in tangential and turning movements.
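One way to quantify crispness, shown below as an assumed proxy rather than the paper's exact metric, is the mean nearest-neighbour distance within an aggregated object cloud: better motion compensation pulls repeated observations of the same surface closer together and lowers the score.

```python
import numpy as np
from scipy.spatial import cKDTree

def mean_nn_distance(points):
    """Proxy crispness score for an aggregated object point cloud.

    Lower values mean repeated returns from the same surface land closer
    together, i.e. the corrected cloud is crisper. This is a stand-in metric,
    not necessarily the crispness definition used in the paper.
    """
    tree = cKDTree(points)
    dists, _ = tree.query(points, k=2)   # k=2: nearest neighbour besides itself
    return float(np.mean(dists[:, 1]))
```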
The practical implications of this research are significant. With the automotive industry's increasing reliance on sophisticated sensor suites for autonomous vehicles, the integration and enhancement of sensor capabilities, as demonstrated, are critical. As oscillating lidar systems are gradually adopted for their performance and cost benefits, ensuring accurate perception under these new modalities is imperative.
Future research can explore expanding the framework's application to various lidar configurations and diverse environmental conditions. Additionally, integrating dynamic object shape changes and leveraging machine learning for further enhancement of velocity predictions represent promising avenues. Collectively, this research moves towards the development of robust perception systems capable of navigating complex real-world environments with increased accuracy and reliability.