A Systematic Review and Taxonomy of Explanations in Decision Support and Recommender Systems
The paper by Ingrid Nunes and Dietmar Jannach provides a comprehensive systematic review and introduces a novel taxonomy of explanations in decision support and recommender systems. Its primary objective is to consolidate existing research on explanation facilities in these systems, which are central to fostering user trust and making the systems' decision-making processes interpretable. The review analyzes 217 primary studies, selected from an initial pool of 1209 according to stringent inclusion criteria.
Key Findings and Contributions
- Historical Context and Evolution: The paper traces a historical trend in which explanations initially focused on rule-based expert systems and later evolved toward complex machine-learning-based systems, including recommender systems. The authors note the transition from simple inference traces to more sophisticated explanation mechanisms aimed at increasing user trust and decision-making effectiveness.
- Explanation Purposes and Types: The review categorizes the purposes of explanations into types such as transparency, effectiveness, trust, and persuasiveness. These purposes guide the design of explanation facilities and frame the paper's analysis of the multifaceted objectives such mechanisms aim to achieve.
- Explanation Content and Presentation: The taxonomy distinguishes between content-related facets, such as input parameters, knowledge base components, the decision-making process, and decision outputs, and presentation-related facets, including the format and perspective in which explanations are delivered. It also considers whether explanations are tailored to the user's context and requirements (a minimal data-structure sketch follows this list).
- Evaluation Methods: The paper highlights a methodological deficiency in the evaluation of explanation techniques, noting the lack of standardized protocols and metrics. Many studies relied on questionnaires capturing user perception rather than objective measures of effectiveness or trust.
- Challenges and Open Questions: Nunes and Jannach identify several open research questions, including the need for algorithm-independent explanations, a better understanding of how the level of explanation detail affects users, and the creation of explanations that adapt to specific user contexts and expertise levels.
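To make the taxonomy's structure more concrete, the following sketch models its content and presentation facets as a simple data structure. This is a minimal illustration under my own assumptions, not the authors' formal notation; all class and field names (`Explanation`, `ContentFacet`, `PresentationFacet`, `Purpose`, and the example values) are hypothetical and chosen only to mirror the facets summarized above.

```python
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import List


class Purpose(Enum):
    """Explanation purposes discussed in the review (non-exhaustive, illustrative)."""
    TRANSPARENCY = auto()
    EFFECTIVENESS = auto()
    TRUST = auto()
    PERSUASIVENESS = auto()


@dataclass
class ContentFacet:
    """Content-related facets: what information the explanation exposes."""
    input_parameters: List[str] = field(default_factory=list)   # e.g., user preferences used as input
    knowledge_sources: List[str] = field(default_factory=list)  # knowledge base components referenced
    process_trace: str = ""                                     # description of the decision-making process
    decision_output: str = ""                                   # the recommendation or decision itself


@dataclass
class PresentationFacet:
    """Presentation-related facets: how the explanation is delivered."""
    format: str = "text"          # e.g., natural language, visualization, structured list
    perspective: str = "system"   # e.g., system-centric vs. user-centric framing
    personalized: bool = False    # tailored to the user's context and expertise?


@dataclass
class Explanation:
    """A single explanation instance combining purpose, content, and presentation."""
    purposes: List[Purpose]
    content: ContentFacet
    presentation: PresentationFacet


# Hypothetical example: a transparent, persuasive explanation for a recommendation.
example = Explanation(
    purposes=[Purpose.TRANSPARENCY, Purpose.PERSUASIVENESS],
    content=ContentFacet(
        input_parameters=["liked: sci-fi", "disliked: horror"],
        knowledge_sources=["collaborative-filtering neighborhood"],
        process_trace="Recommended because 12 users with similar ratings rated it highly.",
        decision_output="Movie X",
    ),
    presentation=PresentationFacet(format="text", perspective="user", personalized=True),
)
print(example.content.process_trace)
```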
Implications for Practice
This paper has both theoretical and practical implications. For practitioners, it provides a taxonomy that can guide the design and implementation of explanation facilities in decision support systems, ensuring that they align with specific user and business objectives. It also underlines the need for more robust evaluations grounded in standardized research protocols.
Future Directions
The authors suggest that future research should focus on bridging the gap between complex machine learning methods and user-understandable explanations. They also recommend exploring the relationships among stakeholder objectives, user-perceived quality factors, and explanation purposes in order to develop more contextually adaptive and user-centric explanation facilities.
This systematic review and proposed taxonomy serve as a foundation for advancing the field of interpretable artificial intelligence, particularly in the realms of recommender and decision support systems, encouraging future studies to build upon these findings for more effective and trust-inspiring AI applications.