A systematic review and taxonomy of explanations in decision support and recommender systems (2006.08672v1)

Published 15 Jun 2020 in cs.AI and cs.IR

Abstract: With the recent advances in the field of artificial intelligence, an increasing number of decision-making tasks are delegated to software systems. A key requirement for the success and adoption of such systems is that users must trust system choices or even fully automated decisions. To achieve this, explanation facilities have been widely investigated as a means of establishing trust in these systems since the early years of expert systems. With today's increasingly sophisticated machine learning algorithms, new challenges in the context of explanations, accountability, and trust towards such systems constantly arise. In this work, we systematically review the literature on explanations in advice-giving systems. This is a family of systems that includes recommender systems, which is one of the most successful classes of advice-giving software in practice. We investigate the purposes of explanations as well as how they are generated, presented to users, and evaluated. As a result, we derive a novel comprehensive taxonomy of aspects to be considered when designing explanation facilities for current and future decision support systems. The taxonomy includes a variety of different facets, such as explanation objective, responsiveness, content and presentation. Moreover, we identified several challenges that remain unaddressed so far, for example related to fine-grained issues associated with the presentation of explanations and how explanation facilities are evaluated.

Authors (2)
  1. Ingrid Nunes (16 papers)
  2. Dietmar Jannach (53 papers)
Citations (308)

Summary

A Systematic Review and Taxonomy of Explanations in Decision Support and Recommender Systems

The paper by Ingrid Nunes and Dietmar Jannach provides a comprehensive systematic review and introduces a novel taxonomy of explanations in decision support and recommender systems. Its primary objective is to consolidate existing research on explanation facilities, which are central to fostering user trust and making the decision-making processes of such systems interpretable. The review is broad in scope, analyzing 217 primary studies selected from an initial pool of 1,209 according to stringent inclusion criteria.

Key Findings and Contributions

  1. Historical Context and Evolution: The paper traces a historical trend in which explanations initially focused on rule-based expert systems and later shifted toward complex machine learning-based systems, including recommender systems. The authors note the transition from simple inference traces to more sophisticated explanation mechanisms aimed at increasing user trust and decision-making effectiveness.
  2. Explanation Purposes and Types: The review categorizes the purposes of explanations into types such as transparency, effectiveness, trust, and persuasiveness. These purposes guide the design of explanation facilities and frame the paper's analysis of the multifaceted objectives such mechanisms aim to achieve.
  3. Explanation Content and Presentation: The taxonomy distinguishes between content-related aspects, such as input parameters, knowledge-base components, decision-making processes, and decision outputs, and presentation facets, such as the format and perspective in which explanations are delivered. It also captures whether explanations are tailored to the user's context and requirements (see the sketch after this list).
  4. Evaluation Methods: The paper highlights methodological weaknesses in the evaluation of explanation techniques, noting the lack of standardized protocols and metrics. Many studies relied on user perceptions gathered through questionnaires rather than objective measures of effectiveness or trust.
  5. Challenges and Open Questions: Nunes and Jannach identify several open research questions, including the need for algorithm-independent explanations, a better understanding of how the level of explanation detail affects users, and responsive explanations tailored to specific user contexts and expertise levels.

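To make the taxonomy facets summarized above more concrete, the following minimal Python sketch encodes a few of them (explanation objective, content source, responsiveness, and presentation) as simple types. The enum values and the ExplanationDesign class are illustrative names derived from this summary, not identifiers taken from the paper or any accompanying artifact.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Objective(Enum):
    """Purposes of explanations discussed in the review."""
    TRANSPARENCY = auto()
    EFFECTIVENESS = auto()
    TRUST = auto()
    PERSUASIVENESS = auto()

class ContentSource(Enum):
    """Content-related aspects an explanation may draw on."""
    INPUT_PARAMETERS = auto()
    KNOWLEDGE_BASE = auto()
    DECISION_PROCESS = auto()
    DECISION_OUTPUT = auto()

class Responsiveness(Enum):
    """Whether explanations adapt to the user's context (illustrative values)."""
    STATIC = auto()            # same explanation for every user
    CONTEXT_TAILORED = auto()  # adapted to user context and requirements

@dataclass
class ExplanationDesign:
    """One point in the taxonomy's design space (hypothetical encoding)."""
    objectives: set[Objective]
    content: set[ContentSource]
    responsiveness: Responsiveness
    presentation_format: str  # e.g. "natural-language text", "chart"

# Example: a trust-oriented, persuasive explanation that exposes the
# decision process and is tailored to the current user.
design = ExplanationDesign(
    objectives={Objective.TRUST, Objective.PERSUASIVENESS},
    content={ContentSource.DECISION_PROCESS},
    responsiveness=Responsiveness.CONTEXT_TAILORED,
    presentation_format="natural-language text",
)
```
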
Implications for Practice

The paper has both theoretical and practical implications. For practitioners, the taxonomy can guide the design and implementation of explanation facilities in decision support systems so that they align with specific user and business objectives. It also underscores the need for more robust evaluations grounded in standardized research protocols.

Future Directions

The authors suggest that future research should focus on bridging the gap between complex machine learning methods and user-understandable explanations. They also recommend exploring the relationships among stakeholder objectives, user-perceived quality factors, and explanation purposes in order to develop more contextually adaptive and user-centric explanation facilities.

This systematic review and the proposed taxonomy provide a foundation for advancing interpretable artificial intelligence, particularly in recommender and decision support systems, and encourage future studies to build on these findings toward more effective and trust-inspiring AI applications.