ShapeMoiré: Channel-Wise Shape-Guided Network for Image Demoiréing (2404.18155v2)

Published 28 Apr 2024 in cs.CV

Abstract: Photographing optoelectronic displays often introduces unwanted moiré patterns due to analog signal interference between the pixel grids of the display and the camera sensor arrays. This work identifies two problems that are largely ignored by existing image demoiréing approaches: 1) moiré patterns vary across different channels (RGB); 2) repetitive patterns are constantly observed. However, employing conventional convolutional (CNN) layers cannot address these problems. Instead, this paper presents the use of our recently proposed Shape concept. It was originally employed to model consistent features from fragmented regions, particularly when identical or similar objects coexist in an RGB-D image. Interestingly, we find that the Shape information effectively captures the moiré patterns in artifact images. Motivated by this discovery, we propose a new method, ShapeMoiré, for image demoiréing. Beyond modeling shape features at the patch-level, we further extend this to the global image-level and design a novel Shape-Architecture. Consequently, our proposed method, equipped with both ShapeConv and Shape-Architecture, can be seamlessly integrated into existing approaches without introducing any additional parameters or computation overhead during inference. We conduct extensive experiments on four widely used datasets, and the results demonstrate that our ShapeMoiré achieves state-of-the-art performance, particularly in terms of the PSNR metric.
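The patch-level Shape modeling mentioned in the abstract can be made concrete with a small sketch. The snippet below is a hypothetical PyTorch illustration of a ShapeConv-style layer, written from the description above and the cited ShapeConv paper rather than from the authors' released code: each convolution patch is split into a base component (its mean) and a shape component (its deviation from that mean), each re-weighted by a learnable per-channel scalar before the ordinary convolution weights are applied. The class and parameter names (ShapeConv2d, w_base, w_shape) are illustrative assumptions, and the per-channel scalar parameterization is a simplification.

```python
# Minimal, hypothetical sketch of a ShapeConv-style layer (patch-level
# base/shape decomposition). Not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ShapeConv2d(nn.Module):
    """Conv2d whose input patches are split into a base (patch mean) and a
    shape (deviation from the mean) component, each re-weighted by a
    learnable per-channel scalar before the usual convolution is applied."""

    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1, padding=1):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding)
        self.k, self.stride, self.padding = kernel_size, stride, padding
        # Learnable weights for the base and shape components (one per channel).
        self.w_base = nn.Parameter(torch.ones(1, in_ch, 1, 1))
        self.w_shape = nn.Parameter(torch.ones(1, in_ch, 1, 1))

    def forward(self, x):
        b, c, h, w = x.shape
        # Extract k*k patches: (B, C*k*k, L), L = number of output positions.
        patches = F.unfold(x, self.k, padding=self.padding, stride=self.stride)
        L = patches.shape[-1]
        patches = patches.view(b, c, self.k * self.k, L)
        base = patches.mean(dim=2, keepdim=True)   # patch-level base component
        shape = patches - base                     # patch-level shape component
        patches = self.w_base * base + self.w_shape * shape
        # Apply the ordinary convolution weights to the re-weighted patches.
        weight = self.conv.weight.view(self.conv.out_channels, -1)  # (O, C*k*k)
        out = weight @ patches.reshape(b, c * self.k * self.k, L)   # (B, O, L)
        out = out + self.conv.bias.view(1, -1, 1)
        h_out = (h + 2 * self.padding - self.k) // self.stride + 1
        w_out = (w + 2 * self.padding - self.k) // self.stride + 1
        return out.view(b, self.conv.out_channels, h_out, w_out)
```

Because both the patch mean and the re-weighting are linear in the input, the two scalars can in principle be folded back into a standard convolution kernel once training is finished, which is consistent with the abstract's claim of no additional parameters or computation overhead at inference.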
