Inherent Diverse Redundant Safety Mechanisms for AI-based Software Elements in Automotive Applications (2402.08208v2)

Published 13 Feb 2024 in cs.AI

Abstract: This paper explores the role and challenges of AI algorithms, specifically AI-based software elements, in autonomous driving systems. These AI systems are fundamental to executing real-time critical functions in complex, high-dimensional environments. They handle vital tasks such as multi-modal perception, cognition, and decision-making, including motion planning, lane keeping, and emergency braking. A primary concern is the ability (and necessity) of AI models to generalize beyond their initial training data. This generalization issue becomes evident in real-time scenarios, where models frequently encounter inputs not represented in their training or validation data. In such cases, AI systems must still function effectively despite distributional or domain shifts. This paper investigates the risks associated with overconfident AI models in safety-critical applications such as autonomous driving. To mitigate these risks, methods for training AI models that maintain performance without overconfidence are proposed, including certainty-reporting architectures and diverse training data. While various distribution-based methods exist to provide safety mechanisms for AI models, there is a noted lack of systematic assessment of these methods, especially in the context of safety-critical automotive applications, and many methods in the literature do not adapt well to the quick response times required in safety-critical edge applications. This paper reviews these methods, discusses their suitability for safety-critical applications, and highlights their strengths and limitations. It also proposes potential improvements to enhance the safety and reliability of AI algorithms in autonomous vehicles in the context of rapid and accurate decision-making.
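To make the certainty-reporting idea concrete, the sketch below gates a classifier's decision on a Monte Carlo dropout uncertainty estimate (in the spirit of Gal and Ghahramani's dropout-as-Bayesian-approximation), falling back to a redundant safe channel when predictive entropy is high. This is a minimal illustration under assumed components, not the paper's implementation: the `PerceptionNet` model, the entropy threshold, and the fallback action are all hypothetical stand-ins.

```python
import torch
import torch.nn as nn

# Minimal sketch of MC-dropout certainty reporting. The network,
# threshold, and fallback are illustrative assumptions, not the
# architecture proposed in the paper.

class PerceptionNet(nn.Module):
    def __init__(self, in_dim=64, n_classes=4):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Dropout(p=0.2),  # kept active at inference for MC sampling
            nn.Linear(128, n_classes),
        )

    def forward(self, x):
        return self.layers(x)

def mc_dropout_predict(model, x, n_samples=20):
    """Run repeated stochastic forward passes; return mean class
    probabilities and predictive entropy as an uncertainty score."""
    model.train()  # keep dropout stochastic; no weights are updated
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )
    mean_probs = probs.mean(dim=0)
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)
    return mean_probs, entropy

# Usage: gate the decision on the reported uncertainty.
model = PerceptionNet()
x = torch.randn(1, 64)           # stand-in for a perception feature vector
mean_probs, entropy = mc_dropout_predict(model, x)
ENTROPY_THRESHOLD = 0.8          # illustrative value; tuned offline in practice
if entropy.item() > ENTROPY_THRESHOLD:
    action = "fallback"          # hand off to a diverse redundant mechanism
else:
    action = f"class_{mean_probs.argmax(dim=-1).item()}"
print(action, entropy.item())
```

Note that the repeated forward passes also illustrate the latency concern raised in the abstract: multi-sample uncertainty estimates can strain the quick response times required in safety-critical edge applications.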

Authors (3)
  1. Mandar Pitale
  2. Alireza Abbaspour
  3. Devesh Upadhyay
Citations (1)
