Gradient events: improved acquisition of visual information in event cameras

Published 3 Sep 2024 in cs.CV (arXiv:2409.01764v1)

Abstract: Event cameras are bio-inspired sensors that respond to brightness changes in the scene asynchronously and independently for every pixel, and transmit these changes as ternary event streams. Event cameras have several benefits over conventional digital cameras, such as significantly higher temporal resolution and pixel bandwidth resulting in reduced motion blur, and very high dynamic range. However, they also introduce challenges, such as the difficulty of applying existing computer vision algorithms to the output event streams, and the flood of uninformative events in the presence of oscillating light sources. Here we propose a new type of event, the gradient event, which benefits from the same properties as a conventional brightness event, but which is by design much less sensitive to oscillating light sources, and which enables considerably better grayscale frame reconstruction. We show that gradient-event-based video reconstruction outperforms existing state-of-the-art brightness-event-based methods by a significant margin when evaluated on publicly available event-to-video datasets. Our results show how gradient information can be used to significantly improve the acquisition of visual information by an event camera.
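To make the two ideas in the abstract concrete, the sketch below illustrates (a) ternary event generation by thresholding a per-pixel signal against a contrast threshold, and (b) recovering an intensity frame, up to a constant offset, from a spatial gradient field by iteratively solving the Poisson equation. This is a generic illustration under assumed conventions (forward-difference gradients, Jacobi iterations, Neumann-style boundaries via edge padding), not the paper's actual sensor model or reconstruction pipeline; the function names are hypothetical.

```python
import numpy as np

def ternary_events(prev, curr, threshold=0.1):
    """Emit ternary events (-1, 0, +1) wherever a per-pixel signal
    changes by more than the contrast threshold (illustrative model,
    not the paper's circuit-level event generation)."""
    diff = curr - prev
    events = np.zeros_like(diff, dtype=np.int8)
    events[diff > threshold] = 1
    events[diff < -threshold] = -1
    return events

def reconstruct_from_gradients(gx, gy, iterations=500):
    """Recover an intensity image (up to a constant) from a spatial
    gradient field (gx, gy) by Jacobi iterations on the Poisson
    equation  laplacian(I) = div(g).  gx/gy are assumed to be
    forward differences; boundaries use edge replication."""
    h, w = gx.shape
    # Divergence of the gradient field (backward differences,
    # matching the forward-difference gradient convention).
    div = np.zeros((h, w))
    div[:, 1:] += gx[:, 1:] - gx[:, :-1]
    div[1:, :] += gy[1:, :] - gy[:-1, :]
    img = np.zeros((h, w))
    for _ in range(iterations):
        # Jacobi update: each pixel becomes the average of its four
        # neighbours minus a quarter of the local divergence.
        padded = np.pad(img, 1, mode='edge')
        neighbors = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                     padded[1:-1, :-2] + padded[1:-1, 2:])
        img = (neighbors - div) / 4.0
    return img
```

The Jacobi solver is chosen here only for brevity; in practice a multigrid or DCT-based Poisson solver converges far faster, and the iterative methods of Young (cited in the paper's references) are the classical treatment of this problem class.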

