Real-Time Environment Condition Classification for Autonomous Vehicles (2405.19305v1)
Abstract: Current autonomous driving technologies are being rolled out in geo-fenced areas with well-defined operating conditions such as time of operation, area, weather, and road conditions. In this way, challenging conditions such as adverse weather, slippery roads, or densely populated city centers can be excluded. To lift the geo-fence restriction and allow a more dynamic availability of autonomous driving functions, the vehicle must autonomously assess environment conditions in real time, identify when the system cannot operate safely, and either stop operation or require the resting passenger to take control. Adverse weather in particular is a fundamental limitation, as sensor performance degrades quickly, prohibiting the use of sensors such as cameras to locate and monitor road signs, pedestrians, or other vehicles. To address this issue, we train a deep learning model to identify outdoor weather and dangerous road conditions, enabling a quick reaction to new situations and environments. We achieve this by introducing an improved taxonomy and label hierarchy for a state-of-the-art adverse-weather dataset and relabelling it with a novel semi-automated labeling pipeline. Using the proposed dataset and hierarchy, we train RECNet, a deep learning model that classifies environment conditions from a single RGB frame. We outperform baseline models by a relative 16% in F1-score while maintaining real-time performance at 20 Hz.
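The abstract does not include implementation details, but the core idea, a single-frame RGB classifier with one prediction head per level of a condition-label hierarchy, can be sketched. Below is a minimal PyTorch sketch assuming a ResNet-18 backbone with separate weather and road-condition heads; the class `RECNetSketch`, the two-head layout, and the class lists are illustrative assumptions, not the authors' released architecture or taxonomy.

```python
# Minimal sketch of a single-frame environment-condition classifier in the
# spirit of RECNet. The backbone choice (ResNet-18), the two-head hierarchy
# (weather vs. road condition), and the class lists are assumptions for
# illustration; the paper's actual architecture and taxonomy may differ.
import torch
import torch.nn as nn
from torchvision import models

WEATHER_CLASSES = ["clear", "rain", "snow", "fog"]    # assumed taxonomy
ROAD_CLASSES = ["dry", "wet", "snow-covered", "icy"]  # assumed taxonomy

class RECNetSketch(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)
        feat_dim = backbone.fc.in_features   # 512 for ResNet-18
        backbone.fc = nn.Identity()          # expose pooled features
        self.backbone = backbone
        # One classification head per level of the label hierarchy.
        self.weather_head = nn.Linear(feat_dim, len(WEATHER_CLASSES))
        self.road_head = nn.Linear(feat_dim, len(ROAD_CLASSES))

    def forward(self, x):
        feats = self.backbone(x)             # (B, 512)
        return self.weather_head(feats), self.road_head(feats)

if __name__ == "__main__":
    model = RECNetSketch().eval()
    frame = torch.randn(1, 3, 224, 224)      # one RGB frame
    with torch.no_grad():
        weather_logits, road_logits = model(frame)
    print(weather_logits.shape, road_logits.shape)  # torch.Size([1, 4]) twice
```

For scale, the reported 20 Hz throughput leaves a 50 ms budget per frame for the full pipeline, and a relative 16% F1-score gain means, for example, a baseline F1 of 0.70 rising to roughly 0.81.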
Authors: Marco Introvigne, Andrea Ramazzina, Stefanie Walz, Dominik Scheuble, and Mario Bijelic