Simulating Nighttime Visible Satellite Imagery of Tropical Cyclones Using Conditional Generative Adversarial Networks (2401.11679v4)
Abstract: Visible (VIS) imagery is important for monitoring Tropical Cyclones (TCs) but is unavailable at night. This study presents a Conditional Generative Adversarial Network (CGAN) model that generates nighttime VIS imagery with significantly enhanced accuracy and spatial resolution. Our method offers three key improvements over existing models. First, we replaced the L1 loss in the pix2pix framework with a Structural Similarity Index Measure (SSIM) loss, which significantly reduced image blurriness. Second, we selected multispectral infrared (IR) bands as input based on a thorough examination of their spectral properties, providing essential physical information for accurate simulation. Third, we incorporated the direction parameters of the sun and the satellite, which addressed the dependence of VIS images on sunlight direction and enabled a much larger training set drawn from continuous daytime data. The model was trained and validated on daytime data from the Advanced Himawari Imager (AHI), achieving SSIM = 0.923 and Root Mean Square Error (RMSE) = 0.0299, results that significantly surpass existing models. We also performed a cross-satellite nighttime validation against the Day/Night Band (DNB) of the Visible/Infrared Imager Radiometer Suite (VIIRS), which yielded outstanding results compared to existing models. Our model is operationally applied to generate accurate VIS imagery with arbitrary virtual sunlight directions, significantly contributing to the nighttime monitoring of various meteorological phenomena.
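To make the loss substitution concrete, the sketch below shows a minimal NumPy implementation of the two metrics the abstract reports (SSIM and RMSE) and of an SSIM-based loss of the form used in place of the pix2pix L1 term. This is an illustration under simplifying assumptions, not the paper's implementation: it computes a single-window (global) SSIM rather than the windowed mean SSIM of Wang et al., and all function names are illustrative.

```python
import numpy as np

def ssim(x, y, data_range=1.0, k1=0.01, k2=0.03):
    """Single-window SSIM over the full image (a simplification of
    the windowed mean SSIM of Wang et al., 2004)."""
    c1 = (k1 * data_range) ** 2  # stabilizes the luminance term
    c2 = (k2 * data_range) ** 2  # stabilizes the contrast term
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return float(
        ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2))
        / ((mu_x**2 + mu_y**2 + c1) * (var_x + var_y + c2))
    )

def rmse(x, y):
    """Root Mean Square Error between two images."""
    return float(np.sqrt(np.mean((x - y) ** 2)))

def ssim_loss(pred, target):
    """Loss term substituted for L1 in the generator objective:
    SSIM is a similarity in [-1, 1], so 1 - SSIM is minimized
    when the generated image matches the target."""
    return 1.0 - ssim(pred, target)
```

Because SSIM compares local luminance, contrast, and structure rather than per-pixel differences, minimizing `1 - SSIM` penalizes the structural smearing that an L1 penalty tolerates, which is the mechanism behind the reduced blurriness reported above.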