Exposing the Unseen: Exposure Time Emulation for Offline Benchmarking of Vision Algorithms (2309.13139v3)
Abstract: Visual Odometry (VO) is one of the fundamental tasks in computer vision for robotics. However, its performance is deeply affected by High Dynamic Range (HDR) scenes, which are omnipresent outdoors. While new Automatic-Exposure (AE) approaches have appeared to mitigate this, comparing them in a reproducible manner is problematic: the behavior of AE depends on the environment, and it affects the image acquisition process. Consequently, AE has traditionally only been benchmarked in an online manner, making the experiments non-reproducible. To solve this, we propose a new methodology based on an emulator that can generate images at any exposure time. It leverages BorealHDR, a unique multi-exposure stereo dataset collected over 10 km on 55 trajectories with challenging illumination conditions. Moreover, it includes lidar-inertial-based global maps with pose estimation for each image frame, as well as Global Navigation Satellite System (GNSS) data for comparison. We show that, using these images acquired at different exposure times, we can emulate realistic images while keeping the Root-Mean-Square Error (RMSE) below 1.78 % compared to ground-truth images. To demonstrate the practicality of our approach for offline benchmarking, we compared three state-of-the-art AE algorithms against four baselines on key elements of the Visual Simultaneous Localization and Mapping (VSLAM) pipeline. Consequently, reproducible evaluation of AE is now possible, speeding up the development of future approaches. Our code and dataset are available online at this link: https://github.com/norlab-ulaval/BorealHDR
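The core idea of the abstract — emulating an image at an arbitrary exposure time from a multi-exposure bracket — can be illustrated with a minimal sketch. This is a hypothetical simplification, not the paper's exact method: it assumes a linear camera response (the paper works with calibrated responses), so pixel intensity scales with exposure time up to sensor saturation. The function name `emulate_exposure` and the bracket layout are assumptions for illustration only.

```python
import numpy as np

def emulate_exposure(brackets, target_time):
    """Emulate an image at an arbitrary exposure time from a
    multi-exposure bracket (illustrative sketch, assuming a linear
    camera response so intensity is proportional to exposure time
    until saturation).

    brackets: dict mapping exposure time (s) -> float image in [0, 1]
    target_time: desired exposure time (s)
    """
    # Pick the bracketed frame whose exposure is closest to the target
    # (in log-ratio, since exposure brackets are multiplicative):
    # its pixels require the least rescaling, hence the least error.
    src_time = min(brackets, key=lambda t: abs(np.log(t / target_time)))
    src = brackets[src_time]

    # Recover relative scene irradiance, re-expose at the target time,
    # and clip to model sensor saturation.
    irradiance = src / src_time
    return np.clip(irradiance * target_time, 0.0, 1.0)
```

With brackets at, say, 2, 4, and 8 ms, a request for 5 ms would be served from the 4 ms frame scaled up by 5/4 and clipped; the paper's emulator additionally handles the non-linear response and evaluates fidelity with the RMSE figure quoted above.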
- “Camera Attributes Control for Visual Odometry With Motion Blur Awareness” In IEEE/ASME Transactions on Mechatronics (TMECH) 28.4 Institute of Electrical and Electronics Engineers (IEEE), 2023, pp. 2225–2235 DOI: 10.1109/tmech.2023.3234316
- “Automated camera-exposure control for robust localization in varying illumination environments” In Autonomous Robots 46.4 Springer, 2022, pp. 515–534
- “Survey of Monocular SLAM Algorithms in Natural Environments” In Conference on Computer and Robot Vision (CRV), 2018
- “Learned Camera Gain and Exposure Control for Improved Visual Feature Detection and Matching” In IEEE Robotics and Automation Letters (RA-L) 6.2 IEEE, 2021, pp. 2028–2035 DOI: 10.1109/LRA.2021.3058909
- Zichao Zhang, Christian Forster and Davide Scaramuzza “Active exposure control for robust visual odometry in HDR environments” In IEEE International Conference on Robotics and Automation (ICRA), 2017, pp. 3894–3901
- Joowan Kim, Younggun Cho and Ayoung Kim “Proactive Camera Attribute Control Using Bayesian Optimization for Illumination-Resilient Visual Navigation” In IEEE Transactions on Robotics (T-RO) 36, 2020, pp. 1256–1271 DOI: 10.1109/TRO.2020.2985597
- “Auto-Exposure Algorithm for Enhanced Mobile Robot Localization in Challenging Light Conditions” In Sensors 22 MDPI, 2022, pp. 835 DOI: 10.3390/s22030835
- “Camera Exposure Control for Robust Robot Vision with Noise-Aware Image Quality Assessment” In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2019, pp. 1165–1172
- “Gradient-based camera exposure control for outdoor mobile platforms” In IEEE Transactions on Circuits and Systems for Video Technology 29 Institute of Electrical and Electronics Engineers (IEEE), 2019, pp. 1569–1583 DOI: 10.1109/TCSVT.2018.2846292
- Mohit Gupta, Daisuke Iso and Shree K. Nayar “Fibonacci Exposure Bracketing for High Dynamic Range Imaging” In IEEE International Conference on Computer Vision (ICCV), 2013 DOI: 10.1109/iccv.2013.186
- Inwook Shim, Joon-Young Lee and In So Kweon “Auto-adjusting camera exposure for outdoor robotics using gradient information” In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2014, pp. 1011–1017
- Joowan Kim, Younggun Cho and Ayoung Kim “Exposure Control Using Bayesian Optimization Based on Entropy Weighted Image Gradient” In IEEE International Conference on Robotics and Automation (ICRA), 2018, pp. 857–864
- “Benefit of large field-of-view cameras for visual odometry” In IEEE International Conference on Robotics and Automation (ICRA), 2016, pp. 801–808
- “Learning-based image enhancement for visual odometry in challenging HDR environments” In IEEE International Conference on Robotics and Automation (ICRA), 2018, pp. 805–811
- “Sim2Real in robotics and automation: Applications and challenges” In IEEE Transactions on Automation Science and Engineering 18.2 IEEE, 2021, pp. 398–400
- Ishaan Mehta, Mingliang Tang and Timothy D. Barfoot “Gradient-Based Auto-Exposure Control Applied to a Self-Driving Car” In Conference on Computer and Robot Vision (CRV), 2020, pp. 166–173
- A. Geiger, P. Lenz and R. Urtasun “Are we ready for autonomous driving? The KITTI vision benchmark suite” In IEEE Conference on Computer Vision and Pattern Recognition, 2012, pp. 3354–3361
- “The Oxford Radar RobotCar Dataset: A Radar Extension to the Oxford RobotCar Dataset” In IEEE International Conference on Robotics and Automation (ICRA), 2020, pp. 6433–6438 DOI: 10.1109/ICRA40945.2020.9196884
- Nicholas Carlevaris-Bianco and Ryan M. Eustice “Learning visual feature descriptors for dynamic lighting conditions” In IEEE/RSJ International Conference on Intelligent Robots and Systems, 2014, pp. 2769–2776 DOI: 10.1109/IROS.2014.6942941
- “The UMA-VI dataset: Visual–inertial odometry in low-textured and dynamic illumination environments” In The International Journal of Robotics Research 39 SAGE Publications Inc., 2020, pp. 1052–1060 DOI: 10.1177/0278364920938439
- “TartanDrive: A Large-Scale Dataset for Learning Off-Road Dynamics Models” In IEEE International Conference on Robotics and Automation (ICRA), 2022, pp. 2546–2552 DOI: 10.1109/ICRA46639.2022.9811648
- “FinnForest dataset: A forest landscape for visual SLAM” In Robotics and Autonomous Systems 132 Elsevier B.V., 2020, pp. 103610 DOI: 10.1016/j.robot.2020.103610
- Claude Elwood Shannon “A mathematical theory of communication” In The Bell system technical journal 27.3 Nokia Bell Labs, 1948, pp. 379–423
- Paul Bergmann, Rui Wang and Daniel Cremers “Online Photometric Calibration of Auto Exposure Video for Realtime Visual Odometry and SLAM” In IEEE Robotics and Automation Letters 3 Institute of Electrical and Electronics Engineers (IEEE), 2018, pp. 627–634 DOI: 10.1109/LRA.2017.2777002
- “Determining the camera response from images: What is knowable?” In IEEE Transactions on Pattern Analysis and Machine Intelligence 25, 2003, pp. 1455–1467 DOI: 10.1109/TPAMI.2003.1240119
- J. Engel, V. Usenko and D. Cremers “A Photometrically Calibrated Benchmark For Monocular Visual Odometry” In arXiv:1607.02555, 2016
- “Kilometer-scale autonomous navigation in subarctic forests: challenges and lessons learned” In Field Robotics 2.1 Field Robotics Publication Society, 2022, pp. 1628–1660 DOI: 10.55417/fr.2022050
- Vladimír Kubelka, Maxime Vaidis and François Pomerleau “Gravity-constrained point cloud registration” In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2022, pp. 4873–4879
- “Targetless Calibration of LiDAR-IMU System Based on Continuous-time Batch Estimation” In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2020, pp. 9968–9975 DOI: 10.1109/IROS45743.2020.9341405
- David G Lowe “Object recognition from local scale-invariant features” In IEEE International Conference on Computer Vision, 1999, pp. 1150–1157
- Shaharyar Ahmed Khan Tareen and Zahra Saleem “A comparative analysis of SIFT, SURF, KAZE, AKAZE, ORB, and BRISK” In IEEE International Conference on Computing, Mathematics and Engineering Technologies (iCoMET), 2018, pp. 1–10
- Daniel DeTone, Tomasz Malisiewicz and Andrew Rabinovich “SuperPoint: Self-supervised interest point detection and description” In IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2018, pp. 224–236
- David Nistér “An efficient solution to the five-point relative pose problem” In IEEE Transactions on Pattern Analysis and Machine Intelligence 26.6 IEEE, 2004, pp. 756–770
- G. Bradski “The OpenCV Library” In Dr. Dobb’s Journal of Software Tools, 2000
- “A benchmark for the evaluation of RGB-D SLAM systems” In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2012, pp. 573–580