Emergent Visual Sensors for Autonomous Vehicles (2205.09383v2)

Published 19 May 2022 in cs.CV and cs.RO

Abstract: Autonomous vehicles rely on perception systems to understand their surroundings for downstream navigation tasks. Cameras are essential to these systems because modern computer vision algorithms provide object detection and recognition capabilities that other sensors, such as LiDARs and radars, cannot match. However, limited by its imaging principle, a standard RGB camera may perform poorly in a variety of adverse scenarios, including low illumination, high contrast, and bad weather such as fog, rain, or snow. Moreover, estimating 3D information from 2D image detections is generally harder than with LiDARs or radars. Several new sensing technologies have emerged in recent years to address the limitations of conventional RGB cameras. In this paper, we review the principles of four novel image sensors: infrared cameras, range-gated cameras, polarization cameras, and event cameras. Their comparative advantages, existing and potential applications, and corresponding data processing algorithms are presented in a systematic manner. We expect this study to offer practitioners in the autonomous driving community new perspectives and insights.
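As one concrete illustration of the kind of data processing these emerging sensors require, the minimal sketch below computes the Stokes parameters and the degree and angle of linear polarization from the four oriented channels of a division-of-focal-plane polarization camera. This is the standard textbook formulation only; the function name and interface are our own, and it is not the specific pipeline of the paper or of any work it reviews.

```python
import numpy as np

def polarization_maps(i0, i45, i90, i135):
    """Compute Stokes parameters, degree of linear polarization (DoLP), and
    angle of linear polarization (AoLP) from the 0/45/90/135-degree intensity
    channels of a division-of-focal-plane polarization camera.

    Generic illustrative formulation; inputs are 2-D arrays of equal shape.
    """
    i0, i45, i90, i135 = (np.asarray(c, dtype=float) for c in (i0, i45, i90, i135))
    s0 = 0.5 * (i0 + i45 + i90 + i135)                    # total intensity
    s1 = i0 - i90                                         # horizontal minus vertical component
    s2 = i45 - i135                                       # +45 deg minus -45 deg component
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-6)  # degree of linear polarization in [0, 1]
    aolp = 0.5 * np.arctan2(s2, s1)                       # angle of linear polarization (radians)
    return s0, dolp, aolp
```

In practice, the DoLP and AoLP maps derived this way are what downstream tasks such as specular-reflection removal or polarization-aided segmentation consume instead of, or alongside, the raw intensity image.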
