
Unsupervised Hyperspectral and Multispectral Images Fusion Based on the Cycle Consistency (2307.03413v1)

Published 7 Jul 2023 in cs.CV and eess.IV

Abstract: Hyperspectral images (HSI), whose abundant spectral information reflects material properties, usually have low spatial resolution due to hardware limits. Meanwhile, multispectral images (MSI), e.g., RGB images, have high spatial resolution but deficient spectral signatures. Hyperspectral and multispectral image fusion is therefore a cost-effective and efficient way to acquire images with both high spatial and high spectral resolution. Many conventional HSI and MSI fusion algorithms rely on known spatial degradation parameters (the point spread function, PSF), known spectral degradation parameters (the spectral response function, SRF), or both. Another class of deep learning-based models relies on ground-truth high spatial resolution HSI and needs large amounts of paired training images when working in a supervised manner. Both kinds of models are limited in practical fusion scenarios. In this paper, we propose an unsupervised HSI and MSI fusion model based on cycle consistency, called CycFusion. CycFusion learns the domain transformation between low spatial resolution HSI (LrHSI) and high spatial resolution MSI (HrMSI), and the desired high spatial resolution HSI (HrHSI) is treated as the intermediate feature map of the transformation networks. CycFusion can be trained with objective functions of marginal matching in single transforms and cycle consistency in double transforms. Moreover, the estimated PSF and SRF are embedded in the model as pre-training weights, which further enhances the practicality of our proposed model. Experiments conducted on several datasets show that our proposed model outperforms all compared unsupervised fusion methods. The code for this paper will be available at https://github.com/shuaikaishi/CycFusion for reproducibility.
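The marginal-matching objectives described above compare the degraded versions of a candidate HrHSI against the two observations: spatially degrading it (via the PSF) should recover the LrHSI, and spectrally degrading it (via the SRF) should recover the HrMSI. The toy sketch below illustrates this with deliberately simplified, hypothetical operators (box-average downsampling in place of a learned PSF, a random column-normalized matrix in place of a real SRF); it is not the authors' implementation, only a minimal illustration of the two degradation residuals:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical, chosen only for illustration)
H, W, B = 8, 8, 16   # HrHSI: height, width, spectral bands
s = 4                # spatial downsampling factor (box blur stands in for a PSF)
b = 3                # MSI bands (e.g., RGB)

hr_hsi = rng.random((H, W, B))            # latent high-resolution HSI
srf = rng.random((B, b))                  # stand-in spectral response function
srf /= srf.sum(axis=0, keepdims=True)     # normalize so each MSI band is a weighted average

def spatial_degrade(x, s):
    """Box-average downsampling: a crude stand-in for PSF blurring + decimation."""
    h, w, bands = x.shape
    return x.reshape(h // s, s, w // s, s, bands).mean(axis=(1, 3))

def spectral_degrade(x, srf):
    """Project each pixel's spectrum through the SRF to form an MSI."""
    return x @ srf

lr_hsi = spatial_degrade(hr_hsi, s)       # simulated LrHSI observation
hr_msi = spectral_degrade(hr_hsi, srf)    # simulated HrMSI observation

# A fusion network would output some estimate hr_hat; the marginal-matching
# losses penalize the mismatch between its degraded versions and the
# observations. Using the latent image itself makes both residuals zero.
hr_hat = hr_hsi
loss_spatial = np.abs(spatial_degrade(hr_hat, s) - lr_hsi).mean()
loss_spectral = np.abs(spectral_degrade(hr_hat, srf) - hr_msi).mean()
```

In CycFusion itself, these operators are learned (with the estimated PSF and SRF used as pre-training weights), and the cycle-consistency terms additionally require that mapping LrHSI to HrMSI and back (and vice versa) reproduces the inputs.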

