A self-supervised CNN for image watermark removal (2403.05807v1)

Published 9 Mar 2024 in cs.CV and eess.IV

Abstract: Popular convolutional neural networks mainly use paired images for supervised image watermark removal. However, watermarked images in the real world have no clean reference images, which limits the robustness of such techniques. In this paper, we propose a self-supervised convolutional neural network (CNN) for image watermark removal (SWCNN). Rather than relying on given paired training samples, SWCNN constructs reference watermarked images in a self-supervised way according to the watermark distribution. A heterogeneous U-Net architecture built from simple components is used to extract complementary structural information for image watermark removal. To account for texture information, a mixed loss is exploited to improve the visual quality of the results. In addition, a watermark dataset is constructed. Experimental results show that the proposed SWCNN outperforms popular CNNs in image watermark removal.
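The core self-supervised idea can be sketched as follows: instead of a (watermarked, clean) pair, two differently watermarked views of the same image serve as input and target, in the spirit of Noise2Noise. The sketch below is a minimal illustration of that pairing strategy under assumed details (alpha blending, uniform opacity and placement ranges); the function names and parameters are hypothetical and not taken from the paper.

```python
import numpy as np

def apply_watermark(image, watermark, alpha, x, y):
    # Hypothetical helper: alpha-blend `watermark` onto `image` at
    # offset (x, y). The paper's actual watermark synthesis may differ.
    out = image.copy()
    h, w = watermark.shape[:2]
    region = out[y:y + h, x:x + w]
    out[y:y + h, x:x + w] = (1 - alpha) * region + alpha * watermark
    return out

def self_supervised_pair(image, watermark, rng):
    # Build an (input, target) training pair from ONE clean image by
    # stamping the watermark with two independently sampled placements
    # and opacities. A network trained to map one watermarked view to
    # the other learns to suppress the watermark, since the clean
    # content is the only thing the two views share.
    h, w = watermark.shape[:2]
    H, W = image.shape[:2]

    def sample_view():
        alpha = rng.uniform(0.3, 0.7)            # assumed opacity range
        x = int(rng.integers(0, W - w + 1))      # random placement
        y = int(rng.integers(0, H - h + 1))
        return apply_watermark(image, watermark, alpha, x, y)

    return sample_view(), sample_view()
```

In training, each such pair would feed the U-Net as (input, target) without ever requiring a clean ground-truth image.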

