Joint Camera Intrinsic and LiDAR-Camera Extrinsic Calibration (2202.13708v3)

Published 28 Feb 2022 in cs.RO

Abstract: Sensor-based environmental perception is a crucial step for autonomous driving systems, and accurate calibration between the sensors plays a critical role in it. For LiDAR-camera calibration, existing methods generally calibrate the camera's intrinsic parameters first and then the LiDAR-camera extrinsic parameters. If the camera intrinsics are not calibrated correctly in the first stage, it is difficult to calibrate the LiDAR-camera extrinsics accurately. Because of the camera's complex internal structure and the lack of an effective quantitative evaluation method for intrinsic calibration, in practice the accuracy of the extrinsic calibration is often degraded by small errors in the camera's intrinsic parameters. To this end, we propose a novel target-based method that jointly calibrates the camera intrinsic and LiDAR-camera extrinsic parameters. Firstly, we design a novel calibration board pattern, adding four circular holes around the checkerboard for locating the LiDAR pose. Subsequently, a cost function defined under the reprojection constraints of the checkerboard and circular-hole features is designed to solve for the camera's intrinsic parameters, distortion factors, and the LiDAR-camera extrinsic parameters. Finally, quantitative and qualitative experiments are conducted in real and simulated environments, and the results show that the proposed method achieves accurate and robust performance. The open-source code is available at https://github.com/OpenCalib/JointCalib.

Authors (5)
  1. Guohang Yan (13 papers)
  2. Feiyu He (1 paper)
  3. Chunlei Shi (2 papers)
  4. Xinyu Cai (26 papers)
  5. Yikang Li (64 papers)
Citations (33)

Summary

  • The paper introduces a joint calibration method that simultaneously optimizes camera intrinsics and LiDAR-camera extrinsics to reduce error propagation.
  • It features a custom calibration board with a checkerboard pattern and circular markers, using a reprojection-based cost function for accurate parameter estimation.
  • Experimental results show improved calibration accuracy, enhancing sensor fusion and environmental perception in autonomous driving systems.

Joint Camera Intrinsic and LiDAR-Camera Extrinsic Calibration: An Overview

The paper "Joint Camera Intrinsic and LiDAR-Camera Extrinsic Calibration" addresses an important aspect of autonomous driving systems: sensor calibration. The research highlights the paramount role of precise calibration between LiDAR and camera sensors in achieving accurate environmental perception, which is crucial for the success of autonomous vehicles. Existing methods predominantly apply a sequential calibration approach—determining camera intrinsics first followed by LiDAR-camera extrinsics—which potentially propagates errors from the intrinsic calibration phase to the extrinsic phase. The presented work proposes a novel methodology that jointly optimizes both intrinsic and extrinsic parameters to ameliorate such cascading inaccuracies.

Methodology

The authors introduce a target-based joint calibration method featuring a newly designed calibration board. The board combines a central checkerboard pattern, used for corner-based camera calibration, with four circular holes that allow the board's pose to be located in the LiDAR point cloud. The calibration is driven by a cost function grounded in reprojection constraints, which concurrently optimizes the camera intrinsics, distortion parameters, and LiDAR-camera extrinsics. This holistic approach distinguishes itself by not relying on pre-computed or assumed camera intrinsics, thereby reducing error propagation.
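
As a minimal sketch of how such a joint refinement can be set up (our illustration, not the authors' released implementation; it assumes a pinhole model with two radial distortion terms, a single view for brevity, and SciPy's `least_squares`):

```python
# Sketch of a joint intrinsic/extrinsic refinement (illustrative only;
# not the authors' implementation). Single view, pinhole camera,
# two radial distortion terms.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(params, pts_3d):
    """Project camera-frame 3D points with intrinsics fx, fy, cx, cy, k1, k2."""
    fx, fy, cx, cy, k1, k2 = params
    x = pts_3d[:, 0] / pts_3d[:, 2]
    y = pts_3d[:, 1] / pts_3d[:, 2]
    r2 = x**2 + y**2
    d = 1.0 + k1 * r2 + k2 * r2**2          # radial distortion factor
    return np.stack([fx * d * x + cx, fy * d * y + cy], axis=1)

def residuals(theta, board_pts, corners_px, circle_pts_lidar, circles_px):
    """Stack checkerboard and circle-hole reprojection errors.

    theta = [fx, fy, cx, cy, k1, k2,   intrinsics + distortion
             rvec_cb (3), t_cb (3),    board-to-camera pose
             rvec_cl (3), t_cl (3)]    LiDAR-to-camera extrinsics
    """
    intr = theta[:6]
    R_cb = Rotation.from_rotvec(theta[6:9]).as_matrix()
    t_cb = theta[9:12]
    R_cl = Rotation.from_rotvec(theta[12:15]).as_matrix()
    t_cl = theta[15:18]

    # Checkerboard corners: board frame -> camera frame -> pixels.
    r1 = project(intr, board_pts @ R_cb.T + t_cb) - corners_px
    # Circle-hole centres: LiDAR frame -> camera frame -> pixels.
    r2 = project(intr, circle_pts_lidar @ R_cl.T + t_cl) - circles_px
    return np.concatenate([r1.ravel(), r2.ravel()])

# Usage, once detections and an initial guess are in hand:
# theta0 = np.concatenate([intr_guess, rvec_cb0, t_cb0, rvec_cl0, t_cl0])
# sol = least_squares(residuals, theta0,
#                     args=(board_pts, corners_px, circle_pts_lidar, circles_px))
```

In a full pipeline, the residual would sum over all board placements, with one board pose per view but shared intrinsics and a single shared LiDAR-camera extrinsic.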

Quantitative and qualitative experiments were performed in both real and simulated environments. These experiments demonstrated robust performance by the proposed calibration method, supporting its utility and reliability.

Results and Implications

Numerical results underscore the effectiveness of the joint calibration approach. The quantitative analysis particularly emphasizes the improved accuracy of the extrinsic parameters compared with traditional multi-stage techniques. Furthermore, the simulated experiments, which benefit from precisely known ground-truth values, provide compelling evidence of the alignment accuracy improvements achieved by the authors' method.

In practical terms, this research has notable implications for autonomous driving systems. By enhancing calibration accuracy, the proposed method contributes to more reliable sensor fusion, leading to better perception and decision-making capabilities in autonomous vehicles. The joint optimization strategy represents a departure from traditional sequential calibration methods, potentially influencing future developments in sensor calibration practices across various industries.

Future Directions

The paper suggests directions for future work, including leveraging vehicle motion to reduce the sparsity of the point cloud captured by the LiDAR, which can further improve circle-detection accuracy, as sketched below. The calibration method could also be extended to other types of LiDAR, encompassing varied scanning mechanisms and point densities.
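
As a rough sketch of that motion-aggregation idea (ours, not the paper's; in practice the per-scan poses would come from odometry or SLAM), successive sparse scans can be transformed into a common frame and merged before circle detection:

```python
# Densify a sparse LiDAR view by accumulating scans with known ego-motion
# (illustrative sketch; poses are assumed given, e.g. from odometry).
import numpy as np

def accumulate_scans(scans, poses_world_from_lidar):
    """Merge scans into one cloud expressed in the frame of the first scan.

    scans: list of (N_i, 3) point arrays, each in its own LiDAR frame.
    poses_world_from_lidar: list of 4x4 transforms, one per scan.
    """
    ref_inv = np.linalg.inv(poses_world_from_lidar[0])
    merged = []
    for pts, T in zip(scans, poses_world_from_lidar):
        T_ref = ref_inv @ T                  # scan frame -> reference frame
        merged.append(pts @ T_ref[:3, :3].T + T_ref[:3, 3])
    return np.vstack(merged)
```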

This research exemplifies a methodological advance in sensor calibration for autonomous vehicles, offering the field insights for reducing calibration errors that are pivotal for robust environmental perception. The code and methodology, made publicly available, provide a framework that other researchers and practitioners can build upon or integrate into broader sensor fusion systems.

GitHub: https://github.com/OpenCalib/JointCalib