
Watermarking for Neural Radiation Fields by Invertible Neural Network (2312.02456v1)

Published 5 Dec 2023 in cs.CR

Abstract: To protect the copyright of a 3D scene represented by a neural radiation field, watermark embedding and extraction are treated as a pair of inverse image-transformation problems. A copyright-protection scheme based on invertible neural network watermarking is proposed, which leverages 2D image watermarking techniques to protect the 3D scene. The scheme embeds the watermark into the training images of the neural radiation field through the forward pass of the invertible network, and extracts it from images rendered by the field through the inverse pass, thereby protecting the copyright of both the neural radiation field and the 3D scene. Because the rendering process can degrade the watermark information, the scheme adds an image quality enhancement module that uses a neural network to restore the rendered image before extraction. A watermark is embedded in every training image used to train the neural radiation field, so watermark information can be extracted from multiple viewpoints. Simulation results demonstrate the effectiveness of the method.
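The embed-by-forward-pass / extract-by-inverse-pass idea rests on the exact invertibility of coupling layers, the building blocks of invertible neural networks. The following is a minimal sketch of that mechanism, not the authors' implementation: a single additive coupling step in which the cover-image branch conditions a shift applied to the watermark branch, so the inverse pass recovers the watermark exactly. The conditioning map `transform` and all shapes here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(16, 16))  # hypothetical conditioning weights

def transform(cover):
    # Toy conditioning network: one linear map with a tanh nonlinearity.
    # In a real invertible network this would be a learned sub-network.
    return np.tanh(cover @ W)

def embed(cover, watermark):
    # Forward pass: the watermark branch is shifted by a function of the
    # cover branch; the cover branch passes through unchanged, which is
    # what makes the step exactly invertible.
    stego = watermark + transform(cover)
    return cover, stego

def extract(cover, stego):
    # Inverse pass: subtract the identical conditioning term.
    return stego - transform(cover)

cover = rng.random((16, 16))
watermark = rng.random((16, 16))

_, stego = embed(cover, watermark)
recovered = extract(cover, stego)
assert np.allclose(recovered, watermark)  # exact inversion
```

In the paper's setting the extraction input is not the stego image itself but a NeRF rendering of it, which is lossy; that gap is what motivates the image quality enhancement module that restores the rendered image before the inverse pass is applied.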

