
HyperColorization: Propagating spatially sparse noisy spectral clues for reconstructing hyperspectral images (2403.11935v1)

Published 18 Mar 2024 in cs.CV and eess.IV

Abstract: Hyperspectral cameras face challenging spatial-spectral resolution trade-offs and are more affected by shot noise than RGB photos taken over the same total exposure time. Here, we present a colorization algorithm to reconstruct hyperspectral images from a grayscale guide image and spatially sparse spectral clues. We demonstrate that our algorithm generalizes to varying spectral dimensions for hyperspectral images, and show that colorizing in a low-rank space reduces compute time and the impact of shot noise. To enhance robustness, we incorporate guided sampling, edge-aware filtering, and dimensionality estimation techniques. Our method surpasses previous algorithms in various performance metrics, including SSIM, PSNR, GFC, and EMD, which we analyze as metrics for characterizing hyperspectral image quality. Collectively, these findings provide a promising avenue for overcoming the time-space-wavelength resolution trade-off by reconstructing a dense hyperspectral image from samples obtained by whisk or push broom scanners, as well as hybrid spatial-spectral computational imaging systems.
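The abstract's central idea, colorizing in a low-rank spectral space to cut compute and suppress shot noise, and one of its quality metrics (GFC) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the SVD-based basis, and the toy data are assumptions, and the spatial propagation step guided by the grayscale image is omitted entirely.

```python
import numpy as np

rng = np.random.default_rng(0)

def low_rank_basis(clues, k):
    # clues: (n_clues, n_bands) array of noisy spectra sampled at sparse pixels.
    # Returns a (k, n_bands) orthonormal basis from the SVD of centered clues.
    _, _, vt = np.linalg.svd(clues - clues.mean(axis=0), full_matrices=False)
    return vt[:k]

def gfc(s1, s2):
    # Goodness-of-fit coefficient: cosine similarity between two spectra.
    return abs(np.dot(s1, s2)) / (np.linalg.norm(s1) * np.linalg.norm(s2))

# Toy data: 200 sparse clues over 31 bands, intrinsically rank-4 plus shot-noise-like jitter.
true_basis = rng.normal(size=(4, 31))
clues = rng.normal(size=(200, 4)) @ true_basis + 0.01 * rng.normal(size=(200, 31))

# Project to k=4 coefficients: downstream colorization would then propagate
# only 4 channels instead of 31, and the discarded components carry mostly noise.
B = low_rank_basis(clues, 4)
coeffs = (clues - clues.mean(axis=0)) @ B.T
recon = coeffs @ B + clues.mean(axis=0)
```

In this sketch the guided sampling, edge-aware filtering, and dimensionality (rank) estimation the abstract mentions would sit around the projection step; here the rank is simply fixed at 4 for illustration.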

