
An Event-Oriented Diffusion-Refinement Method for Sparse Events Completion (2401.03153v1)

Published 6 Jan 2024 in cs.CV

Abstract: Event cameras, or dynamic vision sensors (DVS), record asynchronous responses to brightness changes instead of conventional intensity frames, and feature ultra-high sensitivity at low bandwidth. This new mechanism offers great advantages in challenging scenarios with fast motion and large dynamic range. However, the recorded events might be highly sparse due to either limited hardware bandwidth or extreme photon starvation in harsh environments. To unlock the full potential of event cameras, we propose an event sequence completion approach conforming to the unique characteristics of event data in both the processing stage and the output form. Specifically, we treat event streams as 3D event clouds in the spatiotemporal domain, develop a diffusion-based generative model to generate dense clouds in a coarse-to-fine manner, and recover exact timestamps to maintain the temporal resolution of the raw data. To validate the effectiveness of our method comprehensively, we perform extensive experiments on three widely used public datasets with different spatial resolutions, and additionally collect a novel event dataset covering diverse scenarios with highly dynamic motions under harsh illumination. Besides generating high-quality dense events, our method can benefit downstream applications such as object classification and intensity frame reconstruction.
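The abstract's core preprocessing idea — treating an event stream as a 3D point cloud in the spatiotemporal domain, then recovering exact timestamps after densification — can be sketched as a pair of mappings. This is a minimal illustration under assumed conventions (events as `(x, y, t, polarity)` rows, normalization to the unit cube); the function names and the paper's exact normalization are hypothetical, not taken from the paper.

```python
import numpy as np

def events_to_cloud(events, width, height):
    """Map an (N, 4) event array of (x, y, t, polarity) rows to a 3D point
    cloud in the unit cube. Polarity is set aside here; a full pipeline
    would carry it alongside the cloud. Hypothetical sketch, not the
    paper's exact formulation."""
    xs, ys, ts = events[:, 0], events[:, 1], events[:, 2]
    t0, t1 = ts.min(), ts.max()
    cloud = np.stack([
        xs / (width - 1),                      # x -> [0, 1]
        ys / (height - 1),                     # y -> [0, 1]
        (ts - t0) / max(t1 - t0, 1e-9),        # t -> [0, 1]
    ], axis=1)
    # Keep the original time span so exact timestamps can be recovered
    # after the diffusion model densifies the cloud.
    return cloud, (t0, t1)

def cloud_to_events(cloud, width, height, t_span):
    """Inverse mapping: turn a (possibly densified) cloud back into pixel
    coordinates and real timestamps, preserving temporal resolution."""
    t0, t1 = t_span
    xs = np.rint(cloud[:, 0] * (width - 1)).astype(int)
    ys = np.rint(cloud[:, 1] * (height - 1)).astype(int)
    ts = cloud[:, 2] * (t1 - t0) + t0
    return np.stack([xs, ys, ts], axis=1)
```

In this framing, the diffusion model operates entirely inside the normalized cube (where coarse-to-fine generation of dense clouds is natural), while the stored time span lets the pipeline restore the microsecond-scale timestamps that give event data its value.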

Authors (4)
  1. Bo Zhang
  2. Yuqi Han
  3. Jinli Suo
  4. Qionghai Dai
Citations (1)
