
LiDAR-Camera Calibration using 3D-3D Point correspondences (1705.09785v1)

Published 27 May 2017 in cs.RO and cs.CV

Abstract: With the advent of autonomous vehicles, LiDAR and cameras have become an indispensable combination of sensors. They both provide rich and complementary data which can be used by various algorithms and machine learning to sense and make vital inferences about the surroundings. We propose a novel pipeline and experimental setup to find accurate rigid-body transformation for extrinsically calibrating a LiDAR and a camera. The pipeline uses 3D-3D point correspondences in LiDAR and camera frame and gives a closed form solution. We further show the accuracy of the estimate by fusing point clouds from two stereo cameras which align perfectly with the rotation and translation estimated by our method, confirming the accuracy of our method's estimates both mathematically and visually. Taking our idea of extrinsic LiDAR-camera calibration forward, we demonstrate how two cameras with no overlapping field-of-view can also be calibrated extrinsically using 3D point correspondences. The code has been made available as open-source software in the form of a ROS package, more information about which can be sought here: https://github.com/ankitdhall/lidar_camera_calibration .

Citations (146)

Summary

  • The paper introduces a calibration pipeline that utilizes 3D-3D point correspondences to accurately estimate rigid-body transformation parameters.
  • It employs the Kabsch algorithm to solve for the sensor alignment, achieving translation errors of a few centimeters and low RMSE values.
  • The method effectively calibrates multiple cameras with non-overlapping views, enhancing sensor fusion in autonomous systems.

LiDAR-Camera Calibration using 3D-3D Point Correspondences

The research presented in this paper addresses the critical task of extrinsic calibration between LiDAR and camera systems, which are commonly employed in autonomous vehicles and other robotic platforms. The authors propose a novel pipeline that utilizes 3D-3D point correspondences to accurately estimate the rigid-body transformation necessary for such calibration.

Approach and Methodology

The paper introduces a method focused on deriving extrinsic calibration parameters by employing 3D-3D correspondences directly, in contrast to more conventional approaches that rely on 2D-3D correspondences. The setup involves obtaining 3D points in both the LiDAR and camera frames and solving for the transformation using a closed-form solution derived from these correspondences.
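
In standard form, given n corresponding points p_i in the LiDAR frame and q_i in the camera frame, the calibration reduces to the classical 3D-3D registration problem:

$$
(R^{*}, t^{*}) \;=\; \arg\min_{R \in SO(3),\; t \in \mathbb{R}^3} \sum_{i=1}^{n} \left\lVert R\,p_i + t - q_i \right\rVert^{2}
$$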

The method hinges on markers attached to planar surfaces such as cardboard, which are augmented with ArUco tags. These tags facilitate the calculation of the [R|t] transformation between camera and marker coordinate frames. The Iterative Closest Point (ICP) algorithm is initially considered for solving this transformation; however, the authors employ the Kabsch algorithm instead, as it deals efficiently with scenarios where point correspondences are explicitly known.
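
The following is a minimal NumPy sketch of the Kabsch step, not the authors' ROS implementation; the function and variable names are illustrative:

```python
import numpy as np

def kabsch(p, q):
    """Closed-form estimate of R, t such that q_i ~ R @ p_i + t.

    p, q: (n, 3) arrays of corresponding 3D points, e.g. marker
    corners in the LiDAR frame (p) and the camera frame (q).
    """
    p_bar, q_bar = p.mean(axis=0), q.mean(axis=0)
    # Cross-covariance of the centered point sets
    H = (p - p_bar).T @ (q - q_bar)
    U, _, Vt = np.linalg.svd(H)
    # Fix a possible reflection so that det(R) = +1
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q_bar - R @ p_bar
    return R, t
```

Because each correspondence is identified explicitly (for example, by its ArUco marker ID), no iterative nearest-neighbor association is required, which is why Kabsch is preferred over ICP in this setting.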

Key Results

The authors support the method's accuracy with extensive experiments across different sensor configurations. The results show consistent, precise estimates of translation and rotation, with translation errors on the order of a few centimeters and low RMSE values relative to manual measurements.

Furthermore, the paper demonstrates the utility of the proposed method in scenarios where multiple cameras with non-overlapping fields of view must be calibrated. It describes a procedure that fuses point clouds using the estimated transformations, yielding near-perfect alignment.
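
A minimal sketch of this fusion step, assuming the clouds are plain (n, 3) NumPy arrays and that (R_ab, t_ab) maps camera B's frame into camera A's; all names here are illustrative:

```python
import numpy as np

def transform_cloud(points, R, t):
    """Map an (n, 3) point cloud into another frame via q = R p + t."""
    return points @ R.T + t

# Illustrative fusion of two stereo clouds in camera A's frame; the
# random arrays below stand in for real stereo point clouds, and the
# identity extrinsics are placeholders for an estimated (R_ab, t_ab).
cloud_a = np.random.rand(1000, 3)
cloud_b = np.random.rand(1000, 3)
R_ab, t_ab = np.eye(3), np.zeros(3)  # placeholder extrinsics B -> A
fused = np.vstack([cloud_a, transform_cloud(cloud_b, R_ab, t_ab)])
```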

Implications and Future Directions

Practically, this research presents a significant advancement in sensor calibration for autonomous systems. By addressing the challenge of calibrating cameras that may not share overlapping views, the paper extends the method's applicability across a broader range of systems and configurations.

Theoretically, the application of the Kabsch algorithm in this context illustrates its potential for similar problems in robotics where point correspondences can be assumed accurate. The comprehensive experimental validation adds to the method's credibility and invites further exploration into optimizing the robustness and efficiency of this calibration technique.

Looking forward, future developments could focus on enhancing the method's scalability and integrating it into dynamic systems where sensor configurations may change over time. Further exploration into automatic extraction of 3D point correspondences could also enhance its appeal for real-time applications.

Overall, the research provides a robust, repeatable solution to a complex problem, contributing valuable insights and tools to the field of robotics and sensor fusion.
