
Seeing Motion at Nighttime with an Event Camera

Published 18 Apr 2024 in cs.CV (arXiv:2404.11884v1)

Abstract: We focus on a very challenging task: imaging dynamic scenes at nighttime. Most previous methods rely on low-light enhancement of a conventional RGB camera. However, they inevitably face a dilemma between the long exposure time required at night and the motion blur of dynamic scenes. Event cameras react to dynamic changes with higher temporal resolution (microseconds) and higher dynamic range (120 dB), offering an alternative solution. In this work, we present a novel nighttime dynamic imaging method with an event camera. Specifically, we discover that events at nighttime exhibit temporal trailing characteristics and a spatially non-stationary distribution. Consequently, we propose a nighttime event reconstruction network (NER-Net), which mainly includes a learnable event timestamp calibration module (LETC) to align the temporally trailing events and a non-uniform illumination aware module (NIAM) to stabilize the spatiotemporal distribution of events. Moreover, we construct a paired real low-light event dataset (RLED) through a co-axial imaging system, including 64,200 spatially and temporally aligned ground-truth images and low-light events. Extensive experiments demonstrate that the proposed method outperforms state-of-the-art methods in terms of visual quality and generalization ability on real-world nighttime datasets. The project is available at: https://github.com/Liu-haoyue/NER-Net.
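Before a stream of asynchronous events can be fed to a reconstruction network like the one the abstract describes, it is typically accumulated into a dense spatio-temporal tensor. The sketch below shows one common such representation, a voxel grid with bilinear interpolation along the time axis; this is a standard preprocessing step in event-based video reconstruction, not the paper's specific LETC/NIAM modules, and the function name and event layout `(x, y, t, polarity)` are illustrative assumptions.

```python
import numpy as np

def events_to_voxel_grid(events, num_bins, height, width):
    """Accumulate events (rows of x, y, t, polarity) into a
    (num_bins, height, width) voxel grid, splitting each event
    between its two nearest temporal bins (bilinear in time)."""
    voxel = np.zeros((num_bins, height, width), dtype=np.float32)
    x = events[:, 0].astype(int)
    y = events[:, 1].astype(int)
    t = events[:, 2].astype(float)
    p = events[:, 3].astype(float)

    # Normalize timestamps to the bin axis [0, num_bins - 1].
    t_norm = (num_bins - 1) * (t - t.min()) / max(t.max() - t.min(), 1e-9)
    left = np.floor(t_norm).astype(int)
    right = np.clip(left + 1, 0, num_bins - 1)
    w_right = t_norm - left  # fraction assigned to the later bin

    # Map polarity {0, 1} to signed contributions {-1, +1}.
    pol = 2.0 * p - 1.0
    np.add.at(voxel, (left, y, x), pol * (1.0 - w_right))
    np.add.at(voxel, (right, y, x), pol * w_right)
    return voxel
```

Because each event's weight is split so the two bin contributions sum to one, the grid's total mass equals the signed event count, which makes it easy to sanity-check on synthetic data.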
