Steganography for Neural Radiance Fields by Backdooring (2309.10503v1)
Abstract: Implicit neural representations of visual data (such as images, videos, and 3D models) have recently attracted significant attention in computer vision research. In this letter, we propose a novel model steganography scheme based on implicit neural representation. The message sender leverages the viewpoint-synthesis capability of Neural Radiance Fields (NeRF) by introducing a secret viewpoint as a key: the NeRF model renders an image from this secret viewpoint, which serves as a backdoor. A message extractor is then trained by overfitting to establish a one-to-one mapping between the secret viewpoint image and the secret message. The sender delivers the trained NeRF model and the message extractor to the receiver over an open channel; the receiver uses the key shared by both parties to render the image from the secret viewpoint with the NeRF model, and then recovers the secret message by feeding this image to the message extractor. The inherent complexity of the viewpoint information prevents attackers from accurately recovering the secret message. Experimental results demonstrate that the trained message extractor delivers fast, high-capacity steganography with 100\% message-extraction accuracy, and the large viewpoint key space of NeRF ensures the security of the scheme.
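The sender/receiver protocol in the abstract can be sketched as a toy Python example. This is not the paper's implementation: `render_view` is a hypothetical stand-in for a full NeRF forward pass (here a deterministic hash of the viewpoint), and `MessageExtractor` simulates the overfitted extractor network with an exact-match lookup, illustrating only the one-to-one mapping between the secret-view image and the secret message.

```python
import hashlib

def render_view(model_weights: bytes, viewpoint: tuple) -> bytes:
    """Stand-in for NeRF rendering: deterministically maps a camera
    viewpoint (e.g. theta, phi, radius) to an 'image' digest. In the
    real scheme this is a NeRF render; a hash suffices here to show
    that only the exact key viewpoint reproduces the secret image."""
    return hashlib.sha256(model_weights + repr(viewpoint).encode()).digest()

class MessageExtractor:
    """Stand-in for the overfitted extractor: it memorizes a one-to-one
    mapping from the secret-view image to the secret message."""
    def __init__(self):
        self._mapping = {}

    def overfit(self, secret_image: bytes, message: bytes) -> None:
        self._mapping[secret_image] = message

    def extract(self, image: bytes):
        # Returns None for any image other than the secret-view image.
        return self._mapping.get(image)

# --- Sender side ---
model = b"trained-nerf-weights"     # sent over the open channel
key_viewpoint = (0.3, 1.2, 4.0)     # secret key shared out of band
secret_image = render_view(model, key_viewpoint)
extractor = MessageExtractor()      # also sent over the open channel
extractor.overfit(secret_image, b"secret message")

# --- Receiver side ---
received_image = render_view(model, key_viewpoint)
recovered = extractor.extract(received_image)   # b"secret message"

# An attacker guessing even a nearby viewpoint recovers nothing:
wrong_image = render_view(model, (0.3, 1.2, 4.1))
leaked = extractor.extract(wrong_image)         # None
```

The design point the sketch captures is that the key space is the continuous space of camera viewpoints: an attacker who holds both the NeRF model and the extractor still cannot extract the message without the exact secret viewpoint.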