Improving Explainable Object-induced Model through Uncertainty for Automated Vehicles (2402.15572v1)

Published 23 Feb 2024 in cs.AI, cs.CV, and cs.RO

Abstract: The rapid evolution of automated vehicles (AVs) has the potential to provide safer, more efficient, and comfortable travel options. However, these systems face challenges regarding reliability in complex driving scenarios. Recent explainable AV architectures neglect crucial information related to inherent uncertainties while providing explanations for actions. To overcome such challenges, our study builds upon the "object-induced" model approach that prioritizes the role of objects in scenes for decision-making and integrates uncertainty assessment into the decision-making process using an evidential deep learning paradigm with a Beta prior. Additionally, we explore several advanced training strategies guided by uncertainty, including uncertainty-guided data reweighting and augmentation. Leveraging the BDD-OIA dataset, our findings underscore that the model, through these enhancements, not only offers a clearer comprehension of AV decisions and their underlying reasoning but also surpasses existing baselines across a broad range of scenarios.
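The evidential formulation the abstract describes can be sketched in a few lines. In evidential deep learning with a Beta prior (the two-class case of a Dirichlet), the network emits non-negative evidence for and against an action; the Beta parameters are the evidence plus one, the predicted probability is the Beta mean, and the uncertainty shrinks as total evidence grows. The function names below, and the exponential weighting scheme for uncertainty-guided reweighting, are illustrative assumptions for exposition, not the paper's exact implementation.

```python
import numpy as np

def beta_evidence_to_uncertainty(evidence_pos, evidence_neg):
    """Map non-negative network evidence to a Beta posterior.

    Follows the standard evidential deep learning recipe (Sensoy et al.,
    2018) specialized to two classes: alpha = e+ + 1, beta = e- + 1.
    """
    alpha = evidence_pos + 1.0
    beta = evidence_neg + 1.0
    strength = alpha + beta            # total evidence mass S = alpha + beta
    prob = alpha / strength            # Beta mean: expected action probability
    uncertainty = 2.0 / strength       # u = K / S with K = 2 classes
    return prob, uncertainty

def uncertainty_weights(uncertainties, temperature=1.0):
    """Hypothetical uncertainty-guided sample reweighting.

    Upweights examples the model is uncertain about, normalized so the
    mean weight is 1; the paper's actual scheme may differ.
    """
    w = np.exp(np.asarray(uncertainties, dtype=float) / temperature)
    return w * len(w) / w.sum()
```

With zero evidence on both sides the sketch returns probability 0.5 and maximal uncertainty 1.0; as evidence accumulates (say 9 votes for, 1 against), the probability moves toward the evidence ratio and the uncertainty decays as 2/S.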

