Automatic Target-Less Camera-LiDAR Calibration From Motion and Deep Point Correspondences (2404.17298v3)

Published 26 Apr 2024 in cs.RO

Abstract: Sensor setups of robotic platforms commonly include both camera and LiDAR, as they provide complementary information. However, fusing these two modalities typically requires a highly accurate calibration between them. In this paper, we propose MDPCalib, a novel method for camera-LiDAR calibration that requires neither human supervision nor any specific target objects. Instead, we utilize sensor motion estimates from visual and LiDAR odometry as well as deep learning-based 2D-pixel-to-3D-point correspondences that are obtained without in-domain retraining. We formulate camera-LiDAR calibration as an optimization problem and minimize the costs induced by constraints from sensor motion and point correspondences. In extensive experiments, we demonstrate that our approach yields highly accurate extrinsic calibration parameters and is robust to random initialization. Additionally, our approach generalizes to a wide range of sensor setups, which we demonstrate by employing it on various robotic platforms, including a self-driving perception car, a quadruped robot, and a UAV. To make our calibration method publicly accessible, we release the code on our project website at http://calibration.cs.uni-freiburg.de.
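To make the two cost terms from the abstract concrete, the following is a minimal sketch of such a joint optimization, assuming synthetic inputs and SciPy's least_squares in place of whatever optimizer the paper actually uses; all names here (to_matrix, residuals, cam_motions, lidar_motions, pts3d, pix2d, K) are hypothetical illustrations, not the authors' code.

```python
# Sketch only: a hand-eye motion term plus a 2D-3D reprojection term,
# solved jointly for the camera-from-LiDAR extrinsic. Not the paper's
# implementation; a pinhole camera model and pre-matched data are assumed.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation as R

def to_matrix(x):
    """6-DoF vector (rotation vector, translation) -> 4x4 homogeneous transform."""
    T = np.eye(4)
    T[:3, :3] = R.from_rotvec(x[:3]).as_matrix()
    T[:3, 3] = x[3:]
    return T

def residuals(x, cam_motions, lidar_motions, pts3d, pix2d, K):
    T = to_matrix(x)  # candidate camera-from-LiDAR extrinsic
    res = []
    # Motion term: relative camera motion A (visual odometry) and LiDAR
    # motion B (LiDAR odometry) must satisfy the hand-eye relation A T = T B.
    for A, B in zip(cam_motions, lidar_motions):
        res.extend((A @ T - T @ B)[:3, :].ravel())
    # Correspondence term: matched 3D LiDAR points should reproject onto
    # their predicted 2D pixels under the intrinsics K.
    p_cam = T[:3, :3] @ pts3d.T + T[:3, 3:4]  # 3xN points in camera frame
    uv = K @ p_cam
    uv = (uv[:2] / uv[2]).T                   # Nx2 projected pixels
    res.extend((uv - pix2d).ravel())
    return np.asarray(res)

# Usage with hypothetical data; x0 may be a random 6-DoF guess, mirroring
# the paper's reported robustness to random initialization:
# sol = least_squares(residuals, x0,
#                     args=(cam_motions, lidar_motions, pts3d, pix2d, K))
# T_cam_lidar = to_matrix(sol.x)
```

In practice the two residual groups would be weighted against each other, since hand-eye and reprojection errors live on different scales; the sketch omits this for brevity.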

Authors (6)
  1. Kürsat Petek (10 papers)
  2. Niclas Vödisch (18 papers)
  3. Johannes Meyer (6 papers)
  4. Daniele Cattaneo (21 papers)
  5. Abhinav Valada (117 papers)
  6. Wolfram Burgard (149 papers)
Citations (3)

