
Tradeoff-Focused Contrastive Explanation for MDP Planning (2004.12960v2)

Published 27 Apr 2020 in cs.HC and cs.AI

Abstract: End-users' trust in automated agents is important as automated decision-making and planning is increasingly used in many aspects of people's lives. In real-world applications of planning, multiple optimization objectives are often involved. Thus, planning agents' decisions can involve complex tradeoffs among competing objectives. It can be difficult for the end-users to understand why an agent decides on a particular planning solution on the basis of its objective values. As a result, the users may not know whether the agent is making the right decisions, and may lack trust in it. In this work, we contribute an approach, based on contrastive explanation, that enables a multi-objective MDP planning agent to explain its decisions in a way that communicates its tradeoff rationale in terms of the domain-level concepts. We conduct a human subjects experiment to evaluate the effectiveness of our explanation approach in a mobile robot navigation domain. The results show that our approach significantly improves the users' understanding, and confidence in their understanding, of the tradeoff rationale of the planning agent.

Authors (3)
  1. Roykrong Sukkerd (1 paper)
  2. Reid Simmons (18 papers)
  3. David Garlan (22 papers)
Citations (27)

Summary

Tradeoff-Focused Contrastive Explanation for MDP Planning

The paper, "Tradeoff-Focused Contrastive Explanation for MDP Planning", presents a novel approach focused on enhancing trust in automated agents through improved explanation of their decision-making processes. As automated planning systems grow in complexity and become more prevalent in various applications, understanding the rationale behind their decisions, especially when they involve multiple conflicting objectives, is crucial. This paper targets the issue of trust by allowing a multi-objective Markov Decision Process (MDP) planning agent to offer contrastive explanations in terms that users can relate to domain-specific concepts.

Contributions

The authors make several key contributions:

  1. Explainable Representation for MDP: The paper introduces an extension to the standard factored MDP representation. It incorporates quality-attribute semantics, allowing the representation to retain the underlying factors impacting objective functions, thus enabling the generation of human-interpretable explanations.
  2. Contrastive Explanation Methodology: They propose a method for explaining tradeoffs in MDP planning via contrastive explanations, contrasting the selected solution against several Pareto-optimal alternatives to highlight the tradeoffs made by the agent's policy (see the sketch after this list).
  3. Empirical Evaluation: The approach was empirically validated through a human subjects experiment in a mobile robot navigation domain. Findings indicated that using these explanations markedly improved users' understanding and confidence in the planning agent's tradeoff rationale.
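
To make contributions (1) and (2) concrete, here is a minimal sketch, not the authors' implementation: candidate policies are summarized by their expected costs on objectives that carry quality-attribute metadata, a Pareto filter removes dominated alternatives, and the remaining tradeoffs are phrased in domain-level terms. All class names, attribute names, and numeric values below are illustrative assumptions.

```python
# Minimal sketch of tradeoff-focused contrastive explanation.
# Not the paper's implementation; names and values are illustrative.

from dataclasses import dataclass

@dataclass(frozen=True)
class QualityAttribute:
    """Domain-level semantics attached to an objective (assumed structure)."""
    name: str                   # e.g. "travel time"
    unit: str                   # e.g. "seconds"

# Hypothetical quality attributes from the robot-navigation domain.
ATTRIBUTES = [
    QualityAttribute("travel time", "seconds"),
    QualityAttribute("expected collisions", "collisions"),
    QualityAttribute("intrusiveness", "penalty points"),
]

# Each candidate policy is summarized by its expected cost on every
# objective (lower is better; all values are made up for illustration).
candidates = {
    "selected":      (120.0, 0.2, 1.0),
    "alternative A": (90.0,  0.9, 1.0),   # faster but riskier
    "alternative B": (150.0, 0.1, 0.5),   # safer but slower
}

def dominates(a, b):
    """True if cost vector a Pareto-dominates b (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_optimal(costs):
    """Keep only candidates not dominated by any other candidate."""
    return {k: v for k, v in costs.items()
            if not any(dominates(other, v)
                       for o, other in costs.items() if o != k)}

def contrast(selected, alternative, attrs):
    """Phrase the tradeoff between two cost vectors in domain-level terms."""
    gains, losses = [], []
    for attr, s, a in zip(attrs, selected, alternative):
        if s < a:
            gains.append(f"improves {attr.name} by {a - s:g} {attr.unit}")
        elif s > a:
            losses.append(f"worsens {attr.name} by {s - a:g} {attr.unit}")
    return (f"The selected policy {', '.join(gains)}, "
            f"at the cost that it {', '.join(losses)}.")

for name, cost in pareto_optimal(candidates).items():
    if name != "selected":
        print(f"vs {name}: {contrast(candidates['selected'], cost, ATTRIBUTES)}")
```

Because the alternatives contrasted against are Pareto-optimal, every comparison surfaces a genuine tradeoff rather than a strictly worse option, which is what lets the explanation communicate why the agent accepted one cost to gain on another.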

Numerical Results and User Study Insights

The authors conducted a user study to test the effectiveness of their approach, employing scenarios in which a robot's chosen navigation path must be evaluated against user preferences across multiple criteria such as travel time, collision avoidance, and intrusiveness. The study involved both a control group and a treatment group, the latter receiving contrastive explanations. Results showed that participants who received explanations had a significantly higher probability of making correct decisions and exhibited higher confidence. Specifically, explanations increased the odds of correctly assessing the robot's decision by a factor of 3.8.
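
For readers less familiar with odds ratios: a factor of 3.8 multiplies the odds of a correct assessment, not the probability. A small sketch of the conversion, where the baseline probability is an assumed value for illustration and not a figure reported in the paper:

```python
# Converting an odds ratio into probabilities. The baseline probability
# below is an assumed illustrative value, not reported in the paper.
def apply_odds_ratio(p_baseline: float, odds_ratio: float) -> float:
    """Return the probability after scaling the baseline odds by odds_ratio."""
    odds = p_baseline / (1 - p_baseline)   # probability -> odds
    new_odds = odds * odds_ratio           # odds ratio acts multiplicatively
    return new_odds / (1 + new_odds)       # odds -> probability

p0 = 0.40  # hypothetical baseline rate of correct assessments
print(f"{p0:.0%} baseline -> {apply_odds_ratio(p0, 3.8):.0%} with explanations")
# prints: 40% baseline -> 72% with explanations
```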

Implications

The paper's implications are relevant to any field where automated systems operate in complex multi-objective environments. By making agents more transparent, this approach can significantly enhance user trust, which is vital for systems integrated into daily life. Moreover, understanding tradeoffs can aid better alignment between a system's operational objectives and user expectations.

Future Directions

The paper sets a foundation for future research into more sophisticated explanation systems. Potential areas include extending the framework for real-time adaptation of explanations as user preferences evolve and fine-tuning the balance between detail and abstraction in explanations to cater to diverse audiences. Additionally, integration with preference learning systems could further empower users to guide agent behavior interactively.

In conclusion, the proposed framework for tradeoff-focused explanations represents a substantial step toward making complex planning systems more intelligible and trustworthy, and it is supported by robust empirical validation. The work promises a better understanding of multi-objective planning processes, with practical applications across many domains of AI.
