
Hyperspectral and Multispectral Image Fusion Using the Conditional Denoising Diffusion Probabilistic Model (2307.03423v1)

Published 7 Jul 2023 in eess.IV, cs.CV, and cs.LG

Abstract: Hyperspectral images (HSI) contain a large amount of spectral information reflecting the characteristics of matter, but their spatial resolution is low due to limitations of imaging technology. Complementary to them are multispectral images (MSI), e.g., RGB images, which have high spatial resolution but insufficient spectral bands. Hyperspectral and multispectral image fusion is a technique for cost-effectively acquiring ideal images with both high spatial and high spectral resolution. Many existing HSI and MSI fusion algorithms rely on known imaging degradation models, which are often unavailable in practice. In this paper, we propose a deep fusion method based on the conditional denoising diffusion probabilistic model, called DDPM-Fus. Specifically, DDPM-Fus comprises a forward diffusion process that gradually adds Gaussian noise to the high spatial resolution HSI (HrHSI) and a reverse denoising process that learns to predict the desired HrHSI from its noisy version, conditioned on the corresponding high spatial resolution MSI (HrMSI) and low spatial resolution HSI (LrHSI). Once training is complete, the proposed DDPM-Fus runs the reverse process on the test HrMSI and LrHSI to generate the fused HrHSI. Experiments conducted on one indoor and two remote sensing datasets show the superiority of the proposed model over other advanced deep learning-based fusion methods. The code for this work will be open-sourced for reproducibility at: https://github.com/shuaikaishi/DDPMFus
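The forward diffusion process the abstract describes can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the noise schedule (a linear beta schedule), the patch dimensions, and the function names below are hypothetical stand-ins. It shows only the closed-form noising step x_t = sqrt(alpha_bar_t) x_0 + sqrt(1 - alpha_bar_t) eps that DDPM-style models apply to the HrHSI during training; the learned conditional reverse process is omitted.

```python
import numpy as np

def make_schedule(T=1000, beta_start=1e-4, beta_end=0.02):
    """Linear beta schedule; alpha_bar_t is the cumulative product of (1 - beta)."""
    betas = np.linspace(beta_start, beta_end, T)
    return np.cumprod(1.0 - betas)

def q_sample(x0, t, alpha_bars, rng):
    """Draw x_t ~ q(x_t | x_0) for the forward diffusion process in closed form."""
    eps = rng.standard_normal(x0.shape)          # Gaussian noise, same shape as x0
    a = alpha_bars[t]
    x_t = np.sqrt(a) * x0 + np.sqrt(1.0 - a) * eps
    return x_t, eps

rng = np.random.default_rng(0)
alpha_bars = make_schedule()

# A toy "HrHSI" patch: 31 spectral bands over 8x8 pixels (hypothetical sizes).
x0 = rng.standard_normal((31, 8, 8))

# Noised sample at an intermediate timestep; at large t the signal is
# dominated by noise, while at t = 0 the sample stays close to x0.
x_t, eps = q_sample(x0, t=500, alpha_bars=alpha_bars, rng=rng)
```

In a full conditional DDPM, a network would be trained to predict `eps` from `x_t`, the timestep, and the conditioning inputs (here, the HrMSI and LrHSI), and the reverse process would iterate that prediction from pure noise back to the fused HrHSI.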

Authors (3)
  1. Shuaikai Shi (4 papers)
  2. Lijun Zhang (239 papers)
  3. Jie Chen (602 papers)
Citations (9)
