AsynEVO: Asynchronous Event-Driven Visual Odometry for Pure Event Streams (2402.16398v2)
Abstract: Event cameras are bio-inspired vision sensors that asynchronously measure per-pixel brightness changes. Their high temporal resolution and asynchronicity offer great potential for estimating robot motion states. Recent works have adopted continuous-time estimation methods to exploit this inherent nature of event cameras, but existing approaches either have poor runtime performance or neglect the high temporal resolution of event cameras. To address these limitations, an Asynchronous Event-driven Visual Odometry (AsynEVO) based on sparse Gaussian Process (GP) regression is proposed to efficiently infer the motion trajectory from pure event streams. Concretely, an asynchronous frontend pipeline is designed to perform event-driven feature tracking and manage feature trajectories, and a parallel dynamic sliding-window backend is presented within the framework of sparse GP regression on $SE(3)$. Notably, a dynamic marginalization strategy is employed to ensure the consistency and sparsity of this GP regression. Experiments conducted on public datasets and in real-world scenarios demonstrate that AsynEVO achieves competitive precision and superior robustness compared to the state of the art. The experiment in the repeated-texture scenario indicates that the high temporal resolution of AsynEVO plays a vital role in estimating high-speed movement. Furthermore, we show that the computational efficiency of AsynEVO significantly outperforms the incremental method.
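For readers unfamiliar with the backend formulation, sparse GP regression on $SE(3)$ is commonly built from a locally defined linear stochastic differential equation driven by white noise. The following is a minimal sketch of a standard white-noise-on-acceleration prior from the continuous-time estimation literature; the prior actually used in AsynEVO may differ, and the symbols here are illustrative:
$$
\boldsymbol{\xi}_k(t) = \ln\!\big(\mathbf{T}(t)\,\mathbf{T}_k^{-1}\big)^{\vee},\qquad
\boldsymbol{\gamma}_k(t) = \begin{bmatrix}\boldsymbol{\xi}_k(t)\\ \dot{\boldsymbol{\xi}}_k(t)\end{bmatrix},\qquad
\dot{\boldsymbol{\gamma}}_k(t) = \begin{bmatrix}\mathbf{0} & \mathbf{1}\\ \mathbf{0} & \mathbf{0}\end{bmatrix}\boldsymbol{\gamma}_k(t) + \begin{bmatrix}\mathbf{0}\\ \mathbf{1}\end{bmatrix}\mathbf{w}(t),\qquad
\mathbf{w}(t) \sim \mathcal{GP}\big(\mathbf{0},\,\mathbf{Q}_c\,\delta(t-t')\big),
$$
where $\mathbf{T}(t)\in SE(3)$ is the continuous-time pose, $\mathbf{T}_k$ is the pose at the $k$-th estimation time, and $\mathbf{Q}_c$ is the power-spectral-density matrix of the driving white noise. Because such a prior is Markovian, the inverse kernel matrix is block-tridiagonal; this is what permits pose interpolation at arbitrary (asynchronous) event timestamps and keeps sliding-window marginalization exactly sparse.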