CMax-SLAM: Event-based Rotational-Motion Bundle Adjustment and SLAM System using Contrast Maximization (2403.08119v1)

Published 12 Mar 2024 in cs.RO and cs.CV

Abstract: Event cameras are bio-inspired visual sensors that capture pixel-wise intensity changes and output asynchronous event streams. They show great potential over conventional cameras in handling challenging robotics and computer vision scenarios, such as high-speed motion and high dynamic range. This paper considers the problem of rotational motion estimation using event cameras. Several event-based rotation estimation methods have been developed over the past decade, but their performance has not yet been evaluated and compared under unified criteria. In addition, these prior works do not consider a global refinement step. To this end, we conduct a systematic study of the problem with two objectives in mind: summarizing previous works and presenting our own solution. First, we compare prior works both theoretically and experimentally. Second, we propose the first event-based rotation-only bundle adjustment (BA) approach, formulated with the state-of-the-art Contrast Maximization (CMax) framework, which is principled and avoids converting events into frames. Third, we use the proposed BA to build CMax-SLAM, the first event-based rotation-only SLAM system comprising a front-end and a back-end. Our BA can run both offline (trajectory smoothing) and online (as the CMax-SLAM back-end). To demonstrate the performance and versatility of our method, we present comprehensive experiments on synthetic and real-world datasets, covering indoor, outdoor and space scenarios. We discuss the pitfalls of real-world evaluation and propose a proxy for the reprojection error as the figure of merit for evaluating event-based rotation BA methods. We release the source code and novel data sequences to benefit the community. We hope this work leads to a better understanding of, and fosters further research on, event-based ego-motion estimation. Project page: https://github.com/tub-rip/cmax_slam
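
For readers unfamiliar with the Contrast Maximization (CMax) framework the abstract builds on: events are warped along a candidate motion, and the sharpness (e.g., variance) of the resulting image of warped events (IWE) is maximized when the candidate matches the true motion. The sketch below illustrates this principle for angular-velocity estimation over one event window; the pinhole intrinsics, the linearized rotational flow field, the toy event data, the nearest-neighbor voting and the grid search are all illustrative assumptions, not the paper's implementation (the authors' code is linked on the project page).

    import numpy as np

    def contrast(omega, px, py, t, K=(200.0, 120.0, 90.0), size=(180, 240)):
        """Variance (contrast) of the Image of Warped Events (IWE) for a
        candidate angular velocity omega = (wx, wy, wz). All values assumed."""
        f, cx, cy = K                            # assumed pinhole intrinsics
        x, y = (px - cx) / f, (py - cy) / f      # calibrated coordinates
        wx, wy, wz = omega
        # Linearized rotational flow field (sign conventions vary in the literature).
        u = x * y * wx - (1.0 + x**2) * wy + y * wz
        v = (1.0 + y**2) * wx - x * y * wy - x * wz
        # Warp each event back to the reference time of the window.
        xw, yw = px - f * u * t, py - f * v * t
        H, W = size
        xi = np.clip(np.rint(xw).astype(int), 0, W - 1)
        yi = np.clip(np.rint(yw).astype(int), 0, H - 1)
        iwe = np.zeros((H, W))
        np.add.at(iwe, (yi, xi), 1.0)            # nearest-neighbor voting; polarity ignored
        return np.var(iwe)                       # sharper IWE -> higher variance

    # Toy data: 60 scene points, each firing 40 events while the camera rolls.
    rng = np.random.default_rng(0)
    f, cx, cy = 200.0, 120.0, 90.0
    n_pts, n_ev = 60, 40
    px0 = np.repeat(rng.uniform(40, 200, n_pts), n_ev)
    py0 = np.repeat(rng.uniform(30, 150, n_pts), n_ev)
    t = np.tile(np.linspace(0.0, 0.02, n_ev), n_pts)   # 20 ms window
    wz_true = 5.0                                      # rad/s about the optical axis
    x, y = (px0 - cx) / f, (py0 - cy) / f
    px = px0 + f * (y * wz_true) * t                   # displace along the same flow model
    py = py0 + f * (-x * wz_true) * t

    # Sweep the roll rate: contrast should peak near the true value.
    cands = np.linspace(0.0, 10.0, 101)
    scores = [contrast(np.array([0.0, 0.0, w]), px, py, t) for w in cands]
    print("contrast peaks at wz =", cands[int(np.argmax(scores))], "rad/s (true 5.0)")

CMax-SLAM builds on this per-window principle: its back-end runs the proposed rotation-only bundle adjustment, refining the entire camera trajectory rather than a single window, either online or offline as trajectory smoothing.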
