
Low-Trace Adaptation of Zero-shot Self-supervised Blind Image Denoising (2403.12382v1)

Published 19 Mar 2024 in eess.IV, cs.CV, and cs.LG

Abstract: Deep learning-based denoisers have been the focus of recent developments in image denoising. In the past few years, there has been increasing interest in self-supervised denoising networks that require only noisy images for training, without clean ground truth. However, a performance gap remains between current self-supervised methods and their supervised counterparts. In addition, these methods commonly rely on assumptions about the noise characteristics, which constrains their applicability in real-world scenarios. Inspired by the properties of the Frobenius norm expansion, we find that incorporating a trace term reduces the disparity between the optimization goals of self-supervised and supervised methods, thereby improving the performance of self-supervised learning. Exploiting this insight, we propose a trace-constraint loss function and design the low-trace adaptation Noise2Noise (LoTA-N2N) model, which bridges the gap between self-supervised and supervised learning. We further show that several existing self-supervised denoising frameworks arise as special cases of the proposed trace-constraint loss. Extensive experiments on natural and confocal image datasets indicate that our method achieves state-of-the-art performance among zero-shot self-supervised image denoising approaches, without relying on any assumptions about the noise.
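The observation behind the trace-constraint loss can be sketched with a standard Frobenius-norm expansion. Writing the noisy observation as y = x + n and the denoiser as f_θ (generic notation chosen here for illustration, not necessarily the paper's exact formulation):

$$
\|f_\theta(y) - x\|_F^2
= \|f_\theta(y) - y\|_F^2
+ 2\,\operatorname{tr}\!\big[(f_\theta(y) - y)^{\top} n\big]
+ \|n\|_F^2,
\qquad y = x + n .
$$

The last term is independent of θ, so the supervised objective (left-hand side) and a Noise2Noise-style self-supervised objective (first term on the right) differ, up to a constant, by the trace term; under this reading, constraining that term toward zero is what a "low-trace" adaptation targets.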
