Spectral Sensitivity Estimation Without a Camera (2304.11549v2)

Published 23 Apr 2023 in eess.IV and cs.CV

Abstract: A number of problems in computer vision and related fields would be mitigated if camera spectral sensitivities were known. Because consumer cameras are not designed for high-precision visual tasks, manufacturers do not disclose their spectral sensitivities. Direct measurement requires a costly optical setup, which has led researchers to devise numerous indirect methods that aim to lower cost and complexity by using color targets. However, color targets introduce new complications that make the estimation more difficult, and consequently there is currently no simple, low-cost, robust go-to method for spectral sensitivity estimation. Furthermore, even when not limited by hardware or cost, researchers frequently work with imagery from multiple cameras that they do not have in their possession. To provide a practical solution to this problem, we propose a framework for spectral sensitivity estimation that requires neither hardware nor physical access to the camera itself. Like other work, we formulate an optimization problem that minimizes a two-term objective function: a camera-specific term derived from a system of equations, and a universal term that bounds the solution space. Unlike other work, we construct both terms from publicly available high-quality calibration data: we use the colorimetric mapping matrices provided by the Adobe DNG Converter to formulate the camera-specific system of equations, and we constrain the solutions using an autoencoder trained on a database of ground-truth curves. On average, we achieve reconstruction errors as low as those that can arise from manufacturing differences between two copies of the same camera. We provide our code and predicted sensitivities for 1,000+ cameras, and discuss which tasks become trivial when camera responses are available.
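The two-term objective described above can be illustrated with a minimal sketch. This is not the paper's actual formulation or code: the color matching functions, the DNG-style matrix, the wavelength grid, and the weight `lam` are all hypothetical, and a simple second-difference smoothness prior stands in for the paper's learned autoencoder constraint. The data term assumes the colorimetric matrix relates the camera sensitivities to the matching functions via S ≈ M · cmf; the regularized solution then has a per-channel closed form.

```python
import numpy as np

# Hypothetical sketch of a two-term spectral sensitivity estimate:
# data term ||S - M @ cmf||^2 plus smoothness term lam * ||D S||^2,
# where D is a second-difference operator (a stand-in for the paper's
# autoencoder-based constraint). All numbers below are illustrative.

wl = np.arange(400, 701, 10)  # wavelength grid in nm (31 samples)
n = wl.size

def gaussian(mu, sigma):
    """Toy bell curve used as a stand-in matching function."""
    return np.exp(-0.5 * ((wl - mu) / sigma) ** 2)

# Toy stand-ins for CIE-like color matching functions, shape (3, n)
cmf = np.stack([gaussian(600, 40), gaussian(550, 40), gaussian(450, 30)])

# Hypothetical DNG-style colorimetric mapping matrix (3 x 3)
M = np.array([[0.90, 0.10, 0.00],
              [0.05, 0.90, 0.05],
              [0.00, 0.10, 0.90]])

# Camera-specific data term target: sensitivities should satisfy S ≈ M @ cmf
target = M @ cmf

# Second-difference operator penalizing non-smooth curves, shape (n-2, n)
D = np.diff(np.eye(n), n=2, axis=0)

lam = 0.5  # illustrative regularization weight
# Per-channel closed-form solution: S = (I + lam D^T D)^{-1} target
A = np.eye(n) + lam * (D.T @ D)
S = np.linalg.solve(A, target.T).T  # estimated sensitivities, shape (3, n)
```

The closed form follows from setting the gradient of the quadratic objective to zero; with a learned prior such as an autoencoder, the minimization would instead be carried out iteratively.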
