
Pola4All: survey of polarimetric applications and an open-source toolkit to analyze polarization (2312.14697v1)

Published 22 Dec 2023 in cs.CV

Abstract: Polarization information of light can provide rich cues for computer vision and scene understanding tasks, such as the type of material, and the pose and shape of objects. With the advent of new and affordable polarimetric sensors, this imaging modality is becoming accessible to a wider public for solving problems such as pose estimation, 3D reconstruction, underwater navigation, and depth estimation. However, we observe several limitations in the use of this sensing modality, as well as a lack of standards and publicly available tools for analyzing polarization images. Furthermore, although polarization camera manufacturers usually provide acquisition tools to interface with their cameras, they rarely include processing algorithms that make use of the polarization information. In this paper, we review recent advances in applications that involve polarization imaging, including a comprehensive survey of recent work on polarization for vision and robotics perception tasks. We also introduce a complete software toolkit that provides common standards to communicate with and process information from most of the micro-grid polarization cameras currently on the market. The toolkit also implements several image processing algorithms for this modality, and it is publicly available on GitHub: https://github.com/vibot-lab/Pola4all_JEI_2023.
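
Micro-grid (division-of-focal-plane) cameras of the kind the toolkit targets capture four linearly polarized intensities (0°, 45°, 90°, 135°) within each super-pixel, and the usual first processing step is to convert these into Stokes parameters, the degree of linear polarization (DoLP), and the angle of linear polarization (AoLP). The following is a minimal NumPy sketch of that standard conversion, not the toolkit's actual API; the function and parameter names are hypothetical.

```python
import numpy as np

def polarization_maps(i0, i45, i90, i135, eps=1e-8):
    """Compute Stokes parameters, DoLP, and AoLP from the four micro-grid
    intensities (hypothetical helper; illustrates the standard formulas only)."""
    i0, i45, i90, i135 = (np.asarray(x, dtype=np.float64) for x in (i0, i45, i90, i135))
    s0 = 0.5 * (i0 + i45 + i90 + i135)        # total intensity
    s1 = i0 - i90                              # horizontal vs. vertical component
    s2 = i45 - i135                            # diagonal components
    dolp = np.sqrt(s1**2 + s2**2) / (s0 + eps) # degree of linear polarization, in [0, 1]
    aolp = 0.5 * np.arctan2(s2, s1)            # angle of linear polarization, in [-pi/2, pi/2]
    return s0, s1, s2, dolp, aolp
```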
