Leveraging Counterfactual Paths for Contrastive Explanations of POMDP Policies (2403.19760v1)

Published 28 Mar 2024 in cs.AI and cs.HC

Abstract: As humans come to rely more on autonomous systems, ensuring the transparency of such systems is important to their continued adoption. Explainable Artificial Intelligence (XAI) aims to reduce confusion and foster trust in systems by providing explanations of agent behavior. Partially observable Markov decision processes (POMDPs) provide a flexible framework capable of reasoning over transition and state uncertainty, while also being amenable to explanation. This work investigates the use of user-provided counterfactuals to generate contrastive explanations of POMDP policies. Feature expectations are used as a means of contrasting the performance of these policies. We demonstrate our approach in a Search and Rescue (SAR) setting. We analyze and discuss the associated challenges through two case studies.

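For readers unfamiliar with the contrast mechanism the abstract mentions: feature expectations summarize a policy pi as the expected discounted sum of state-action features, mu(pi) = E[sum_t gamma^t phi(s_t, a_t)], so two policies can be compared feature by feature. The sketch below estimates this quantity by Monte Carlo rollout and contrasts two policies. It is a minimal illustration under assumed interfaces, not the paper's implementation: env, policy, phi, and every method name here are hypothetical stand-ins.

def feature_expectations(env, policy, phi, n_rollouts=1000, horizon=50, gamma=0.95):
    """Monte Carlo estimate of mu(pi) = E[sum_t gamma^t * phi(s_t, a_t)].

    Hypothetical interfaces (assumptions for this sketch, not the paper's API):
      env.reset() -> (hidden_state, initial_belief)
      env.step(state, action) -> (next_state, observation)
      env.update_belief(belief, action, obs) -> belief   # Bayes filter
      policy(belief) -> action                           # POMDP policy acts on belief
      phi(state, action) -> feature vector
    """
    mu = 0.0
    for _ in range(n_rollouts):
        state, belief = env.reset()
        for t in range(horizon):
            action = policy(belief)
            # Accumulate the discounted feature vector, averaged over rollouts.
            mu = mu + (gamma ** t) * phi(state, action) / n_rollouts
            state, obs = env.step(state, action)
            belief = env.update_belief(belief, action, obs)
    return mu

# A contrastive explanation can then be grounded in the gap between the agent's
# policy and a policy conditioned on the user's counterfactual path:
#   mu_agent = feature_expectations(env, agent_policy, phi)
#   mu_cf    = feature_expectations(env, counterfactual_policy, phi)
#   contrast = mu_agent - mu_cf   # per-feature evidence for "why A rather than B"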
