RainyScape: Unsupervised Rainy Scene Reconstruction using Decoupled Neural Rendering (2404.11401v1)
Abstract: We propose RainyScape, an unsupervised framework for reconstructing clean scenes from a collection of multi-view rainy images. RainyScape consists of two main modules: a neural rendering module and a rain-prediction module that incorporates a predictor network and a learnable latent embedding capturing the rain characteristics of the scene. Specifically, exploiting the spectral bias property of neural networks, we first optimize the neural rendering pipeline to obtain a low-frequency scene representation. We then jointly optimize the two modules, driven by the proposed adaptive direction-sensitive gradient-based reconstruction loss, which encourages the network to distinguish between scene details and rain streaks and facilitates the propagation of gradients to the relevant components. Extensive experiments on both the classic neural radiance field (NeRF) and the recently proposed 3D Gaussian splatting (3DGS) demonstrate the superiority of our method in effectively eliminating rain streaks and rendering clean images, achieving state-of-the-art performance. The constructed high-quality dataset and source code will be made publicly available.
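The abstract does not give the exact form of the direction-sensitive gradient-based reconstruction loss. As a rough, hypothetical illustration of the general idea (rain streaks are predominantly vertical, so gradients along different image directions can be weighted differently, in the spirit of directional gradient priors), one might sketch such a loss as follows; the function names and the weights `w_x`, `w_y` are assumptions, not the paper's definition:

```python
import numpy as np

def directional_gradients(img):
    # Forward differences along the horizontal (x) and vertical (y) axes.
    gx = np.diff(img, axis=1)  # horizontal gradient
    gy = np.diff(img, axis=0)  # vertical gradient
    return gx, gy

def direction_sensitive_loss(pred, target, w_x=1.0, w_y=0.5):
    """Hypothetical weighted gradient-domain L1 reconstruction loss.

    Vertical gradients (gy) are down-weighted here on the assumption
    that near-vertical rain streaks dominate that direction; the
    actual adaptive weighting in RainyScape is not specified in the
    abstract.
    """
    px, py = directional_gradients(pred)
    tx, ty = directional_gradients(target)
    l_pix = np.abs(pred - target).mean()          # pixel-wise term
    l_grad = (w_x * np.abs(px - tx).mean()        # horizontal-gradient term
              + w_y * np.abs(py - ty).mean())     # vertical-gradient term
    return l_pix + l_grad
```

In an actual pipeline this loss would be applied to rendered and observed images during the joint optimization stage, with the weights adapted rather than fixed.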