Run-time Introspection of 2D Object Detection in Automated Driving Systems Using Learning Representations (2403.01172v1)
Abstract: Reliable detection of various objects and road users in the surrounding environment is crucial for the safe operation of automated driving systems (ADS). Despite recent progress in developing highly accurate object detectors based on Deep Neural Networks (DNNs), they remain prone to detection errors, which can have fatal consequences in safety-critical applications such as ADS. An effective remedy to this problem is to equip the system with run-time monitoring, known as introspection in the context of autonomous systems. Motivated by this, we introduce a novel introspection solution that operates at the frame level for DNN-based 2D object detection and leverages neural network activation patterns. The proposed approach pre-processes the neural activation patterns of the object detector's backbone using several different modes. To provide an extensive and fair comparative analysis, we also adapt and implement several state-of-the-art (SOTA) introspection mechanisms for error detection in 2D object detection, using one-stage and two-stage object detectors evaluated on the KITTI and BDD datasets. We compare the performance of the proposed solution in terms of error detection, adaptability to dataset shift, and computational and memory resource requirements. Our performance evaluation shows that the proposed introspection solution outperforms SOTA methods, achieving an absolute reduction in the missed error ratio of 9% to 17% on the BDD dataset.
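The abstract describes pre-processing the detector backbone's activation patterns into a compact representation that a frame-level introspector can score for errors. The sketch below illustrates that general idea only; it is not the paper's implementation. The pooling modes, tensor shapes, and the logistic scoring model are all hypothetical placeholders, and the activations are randomly generated rather than taken from a real detector.

```python
import numpy as np

def pool_activation_map(act, mode="avg"):
    """Reduce a (C, H, W) backbone activation map to a per-channel
    feature vector. 'avg' and 'max' are illustrative pre-processing
    modes, not the specific modes used in the paper."""
    if mode == "avg":
        return act.mean(axis=(1, 2))
    if mode == "max":
        return act.max(axis=(1, 2))
    raise ValueError(f"unknown mode: {mode}")

def error_score(features, weights, bias):
    """Hypothetical frame-level introspector: a logistic model that maps
    the pooled activation features to a probability that the current
    frame contains a detection error."""
    z = float(features @ weights + bias)
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
act = rng.standard_normal((256, 20, 64))      # stand-in for one backbone stage's output
feat = pool_activation_map(act, mode="avg")   # (256,) per-channel summary
score = error_score(feat, rng.standard_normal(256) * 0.01, 0.0)
print(feat.shape, 0.0 <= score <= 1.0)
```

In practice the introspector would be trained on labeled frames (error / no-error) and could consume pooled features from several backbone stages; the single stage and untrained weights here only keep the sketch self-contained.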