OmniColor: A Global Camera Pose Optimization Approach of LiDAR-360Camera Fusion for Colorizing Point Clouds (2404.04693v2)
Abstract: A colored point cloud is a simple and efficient 3D representation with advantages in many fields, including robotic navigation and scene reconstruction. This representation is now commonly used in 3D reconstruction tasks that rely on cameras and LiDARs. However, many existing frameworks fuse the data from these two types of sensors poorly, leading to unsatisfactory mapping results, mainly due to inaccurate camera poses. This paper presents OmniColor, a novel and efficient algorithm for colorizing point clouds using an independent 360-degree camera. Given a LiDAR-based point cloud and a sequence of panoramic images with coarse initial camera poses, our objective is to jointly optimize the poses of all frames so that the images map accurately onto the geometric reconstruction. Our pipeline works in an off-the-shelf manner and requires no feature extraction or matching. Instead, we find optimal poses by directly maximizing the photometric consistency of the LiDAR map. Experiments show that our method overcomes the severe visual distortion of omnidirectional images and greatly benefits from the wide field of view (FOV) of 360-degree cameras, reconstructing various scenarios with accuracy and stability. The code will be released at https://github.com/liubonan123/OmniColor/.
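The key idea in the abstract is pose refinement by photometric consistency rather than feature matching. The sketch below illustrates one way such an objective can be wired up: LiDAR map points are projected into each equirectangular panorama under the current pose estimates, and the poses are adjusted to reduce the spread of the intensities each point samples across frames. This is a minimal sketch under stated assumptions, not the authors' implementation; the 6-DoF pose parametrization, the equirectangular projection helper, and the use of SciPy's `least_squares` are illustrative choices.

```python
# Minimal sketch (not the authors' implementation) of pose refinement by
# photometric consistency: LiDAR points are projected into each panorama and
# poses are optimized to reduce per-point intensity spread across frames.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation as R


def equirect_project(pts_cam, width, height):
    """Map 3D points in the camera frame to equirectangular pixel coordinates."""
    x, y, z = pts_cam[:, 0], pts_cam[:, 1], pts_cam[:, 2]
    lon = np.arctan2(x, z)                                    # azimuth in [-pi, pi]
    lat = np.arcsin(y / (np.linalg.norm(pts_cam, axis=1) + 1e-9))
    u = (lon / (2 * np.pi) + 0.5) * width
    v = (lat / np.pi + 0.5) * height
    return u, v


def sample_gray(image, u, v):
    """Nearest-neighbour intensity lookup with border clamping."""
    h, w = image.shape
    ui = np.clip(u.astype(int), 0, w - 1)
    vi = np.clip(v.astype(int), 0, h - 1)
    return image[vi, ui]


def residuals(pose_params, points_w, images):
    """Photometric inconsistency: deviation of each map point's sampled
    intensity from its mean over all frames."""
    n = len(images)
    poses = pose_params.reshape(n, 6)                         # (rotvec, translation) per frame
    samples = []
    for k, img in enumerate(images):
        rot = R.from_rotvec(poses[k, :3])
        pts_cam = rot.inv().apply(points_w - poses[k, 3:])    # world -> camera frame
        u, v = equirect_project(pts_cam, img.shape[1], img.shape[0])
        samples.append(sample_gray(img, u, v))
    samples = np.stack(samples)                               # (n_frames, n_points)
    return (samples - samples.mean(axis=0)).ravel()


# Toy driver: random map points, two synthetic grayscale panoramas, zero-initialized
# coarse poses standing in for LiDAR-odometry estimates.
rng = np.random.default_rng(0)
points_w = rng.uniform(-5, 5, size=(200, 3))
images = [rng.random((64, 128)) for _ in range(2)]
init = np.zeros(2 * 6)
result = least_squares(residuals, init, args=(points_w, images), max_nfev=20)
print("refined poses:", result.x.reshape(2, 6))
```

The toy driver only shows that the residual and optimizer wiring run end to end; a real pipeline would start from coarse poses produced by LiDAR odometry, sample actual panorama colors, and account for point visibility and image quality when weighting residuals.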
Authors: Bonan Liu, Guoyang Zhao, Jianhao Jiao, Guang Cai, Chengyang Li, Handi Yin, Yuyang Wang, Ming Liu, Pan Hui