NeRT: Implicit Neural Representations for General Unsupervised Turbulence Mitigation (2308.00622v2)
Abstract: Atmospheric and water turbulence mitigation has emerged as a challenging inverse problem in the computer vision and optics communities. Current methods, however, either rely heavily on the quality of the training dataset or fail to generalize across scenarios such as static scenes, dynamic scenes, and text reconstruction. We propose NeRT, a general implicit neural representation for unsupervised atmospheric and water turbulence mitigation. NeRT combines implicit neural representations with the physically correct tilt-then-blur turbulence model to reconstruct a clean, undistorted image given only dozens of distorted input images. We show that NeRT outperforms the state of the art in qualitative and quantitative evaluations on atmospheric and water turbulence datasets, and we demonstrate its ability to eliminate uncontrolled turbulence in real-world environments. Finally, by incorporating NeRT into continuously captured video sequences, we achieve a $48\times$ speedup.
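The "tilt-then-blur" forward model referenced in the abstract first warps the image with a spatially varying tilt (per-pixel displacement) field and only then applies blur. A minimal NumPy/SciPy sketch of that ordering is below; the array sizes, the smooth random tilt field, and the single Gaussian kernel standing in for the spatially varying turbulence blur are all illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of the tilt-then-blur turbulence forward model:
# warp with a per-pixel tilt field first, then blur. The Gaussian
# blur and random tilt field below are illustrative stand-ins.
import numpy as np
from scipy.ndimage import map_coordinates, gaussian_filter

def tilt_then_blur(img, tilt, blur_sigma=1.5):
    """img: (H, W) grayscale image; tilt: (H, W, 2) per-pixel
    (dy, dx) displacements. Returns the distorted observation."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(np.float64)
    # Step 1: tilt — resample the image at displaced coordinates.
    coords = np.stack([yy + tilt[..., 0], xx + tilt[..., 1]])
    tilted = map_coordinates(img, coords, order=1, mode='reflect')
    # Step 2: blur — a single Gaussian kernel as a simple proxy
    # for the spatially varying turbulence blur.
    return gaussian_filter(tilted, blur_sigma)

rng = np.random.default_rng(0)
clean = rng.random((32, 32))
# Smooth the raw noise so the tilt field is spatially correlated.
tilt = gaussian_filter(rng.standard_normal((32, 32, 2)), (4, 4, 0))
distorted = tilt_then_blur(clean, tilt)
```

Reversing the two steps (blur-then-tilt) does not commute with this model in general, which is the distinction the cited turbulence-model analysis draws.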