Perceptive self-supervised learning network for noisy image watermark removal (2403.02211v1)
Abstract: Popular methods usually learn a watermark removal model with a degradation model in a supervised way. However, reference images are difficult to obtain in the real world, and images captured by cameras also suffer from noise. To overcome these drawbacks, we propose a perceptive self-supervised learning network for noisy image watermark removal (PSLNet) in this paper. PSLNet relies on a parallel network to remove noise and watermarks. The upper network uses a task-decomposition idea to remove noise and watermarks sequentially, while the lower network uses a degradation-model idea to remove noise and watermarks simultaneously. Specifically, the paired watermarked images are obtained in a self-supervised way, and the paired noisy images (i.e., noisy and reference images) are obtained in a supervised way. To enhance the clarity of the obtained images, the two sub-networks interact and their clean outputs are fused, improving watermark removal in terms of structural information and pixel enhancement. Taking texture information into account, a mixed loss exploits both the obtained images and their features to achieve a robust model for noisy image watermark removal. Comprehensive experiments show that our proposed method is very effective in comparison with popular convolutional neural networks (CNNs) for noisy image watermark removal. Code is available at https://github.com/hellloxiaotian/PSLNet.
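The two-branch design described above (a serial "denoise, then de-watermark" upper branch, a joint lower branch, and a fusion of the two clean estimates trained with a pixel-plus-feature loss) can be sketched as below. This is a minimal illustrative sketch, not the authors' implementation: the layer counts, channel widths, the 1x1 fusion convolution, the feature extractor, and the 0.1 loss weight are all assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def small_cnn(ch=8):
    """Hypothetical stand-in for a restoration sub-network."""
    return nn.Sequential(
        nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
        nn.Conv2d(ch, 3, 3, padding=1),
    )


class PSLNetSketch(nn.Module):
    def __init__(self):
        super().__init__()
        # Upper branch: task decomposition, remove noise then watermark in sequence.
        self.denoise = small_cnn()
        self.dewatermark = small_cnn()
        # Lower branch: degradation-model idea, remove noise and watermark jointly.
        self.joint = small_cnn()
        # Fuse the two clean estimates with a 1x1 convolution (an assumption).
        self.fuse = nn.Conv2d(6, 3, 1)

    def forward(self, x):
        upper = self.dewatermark(self.denoise(x))   # sequential removal
        lower = self.joint(x)                        # simultaneous removal
        return self.fuse(torch.cat([upper, lower], dim=1))


def mixed_loss(pred, target, feat_fn, w=0.1):
    """Pixel term plus a feature (texture) term; the weight w is an assumption."""
    return F.l1_loss(pred, target) + w * F.l1_loss(feat_fn(pred), feat_fn(target))


# Usage with a toy feature extractor standing in for a perceptual network.
feat_fn = lambda t: F.avg_pool2d(t, 2)
model = PSLNetSketch()
x = torch.randn(1, 3, 32, 32)       # degraded input (noisy + watermarked)
y = torch.randn(1, 3, 32, 32)       # reference image
out = model(x)
loss = mixed_loss(out, y, feat_fn)
```

The fusion step is what lets the serial branch's structural cleanup and the joint branch's pixel-level estimate reinforce each other; in practice the real PSLNet sub-networks are far deeper than these stand-ins.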