
Imaging through the Atmosphere using Turbulence Mitigation Transformer (2207.06465v2)

Published 13 Jul 2022 in eess.IV and cs.CV

Abstract: Restoring images distorted by atmospheric turbulence is a ubiquitous problem in long-range imaging applications. While existing deep-learning-based methods have demonstrated promising results in specific testing conditions, they suffer from three limitations: (1) lack of generalization from synthetic training data to real turbulence data; (2) failure to scale, causing memory and speed challenges when extending to a large number of frames; (3) lack of a fast and accurate simulator to generate training data for neural networks. In this paper, we introduce the turbulence mitigation transformer (TMT), which explicitly addresses these issues. TMT makes three contributions. First, TMT exploits turbulence physics by decoupling the turbulence degradation and introducing a multi-scale loss for removing distortion, improving restoration effectiveness. Second, TMT presents a new attention module along the temporal axis that extracts additional features efficiently, improving memory usage and speed. Third, TMT introduces a new simulator based on a Fourier sampler, temporal correlation, and flexible kernel sizes, improving our capability to synthesize realistic training data. TMT outperforms state-of-the-art video restoration models, especially in generalizing from synthetic to real turbulence data. Code, videos, and datasets are available at https://xg416.github.io/TMT.
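The abstract's first contribution involves a multi-scale loss for removing distortion. As a minimal illustrative sketch only (the paper's exact formulation is not reproduced here; the Charbonnier penalty, the 2x downsampling, and the per-scale weights below are all assumptions), a pyramid loss of this general kind can be written as:

```python
import numpy as np

def charbonnier(pred, target, eps=1e-3):
    """Charbonnier (smooth L1) penalty, a common robust loss in restoration.
    NOTE: using it here is an assumption, not the paper's stated choice."""
    return np.mean(np.sqrt((pred - target) ** 2 + eps ** 2))

def downsample2(img):
    """2x2 average-pool downsampling (hypothetical helper for illustration)."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    x = img[:h, :w]
    return 0.25 * (x[0::2, 0::2] + x[1::2, 0::2] + x[0::2, 1::2] + x[1::2, 1::2])

def multi_scale_loss(pred, target, num_scales=3, weights=(1.0, 0.5, 0.25)):
    """Sum of weighted Charbonnier losses over a resolution pyramid, so that
    large-scale geometric distortion is penalized at coarse scales while
    fine detail is penalized at full resolution."""
    total = 0.0
    for s in range(num_scales):
        total += weights[s] * charbonnier(pred, target)
        pred, target = downsample2(pred), downsample2(target)
    return total
```

Supervising at several resolutions is a standard way to make a restoration network correct coarse warping before fine texture; the depth and weights here are placeholders, not values from TMT.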
