Non-Uniform Exposure Imaging via Neuromorphic Shutter Control (2404.13972v1)

Published 22 Apr 2024 in cs.CV

Abstract: By leveraging the blur-noise trade-off, imaging with non-uniform exposures greatly extends image acquisition flexibility in harsh environments. However, conventional cameras cannot perceive intra-frame dynamic information, which prevents existing methods from performing real-time adaptive shutter control during real-world frame acquisition. To address this challenge, we propose a novel Neuromorphic Shutter Control (NSC) system to avoid motion blur and alleviate instantaneous noise, where the extremely low latency of events is leveraged to monitor real-time motion and facilitate scene-adaptive exposure. Furthermore, to stabilize the inconsistent Signal-to-Noise Ratio (SNR) caused by non-uniform exposure times, we propose an event-based image denoising network within a self-supervised learning paradigm, i.e., SEID, which exploits the statistics of image noise and the inter-frame motion information of events to obtain artificial supervision signals for high-quality imaging in real-world scenes. To illustrate the effectiveness of the proposed NSC, we implement it in hardware by building a hybrid-camera imaging prototype system, with which we collect a real-world dataset containing well-synchronized frames and events in diverse scenarios with different target scenes and motion patterns. Experiments on synthetic and real-world datasets demonstrate the superiority of our method over state-of-the-art approaches.
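To make the exposure-control idea concrete, the sketch below shows one plausible way an event stream could drive a scene-adaptive shutter: the exposure ends once the accumulated event count (a low-latency proxy for intra-frame motion) exceeds a budget or a maximum exposure time elapses. The `camera` and `events` interfaces, thresholds, and polling loop are hypothetical placeholders for illustration only, not the paper's actual NSC implementation.

```python
import time

def adaptive_exposure(camera, events, t_min=0.001, t_max=0.05, motion_budget=5000):
    """Expose until the accumulated event count (motion proxy) reaches the
    budget or the maximum exposure time is hit, whichever comes first.
    `camera` and `events` are assumed interfaces, not a real API."""
    camera.start_exposure()
    t_start = time.monotonic()
    accumulated = 0
    while True:
        elapsed = time.monotonic() - t_start
        # Poll the event camera for new events since the last check;
        # a high event rate indicates motion that would cause blur.
        accumulated += events.count_since_last_poll()
        if elapsed >= t_min and (accumulated >= motion_budget or elapsed >= t_max):
            break
        time.sleep(0.0005)  # short polling interval to keep latency low
    camera.stop_exposure()
    return camera.read_frame(), elapsed
```

Under this kind of policy, static scenes receive long, low-noise exposures while fast motion triggers short exposures, which is what produces the non-uniform exposure times (and hence the inconsistent SNR) that the SEID denoiser is designed to stabilize.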
