
Altruist: Argumentative Explanations through Local Interpretations of Predictive Models (2010.07650v2)

Published 15 Oct 2020 in cs.LG, cs.AI, and cs.LO

Abstract: Explainable AI is an emerging field providing solutions for acquiring insights into the rationale of automated systems. It has been put on the AI map by suggesting ways to tackle key ethical and societal issues. However, existing explanation techniques are often not comprehensible to the end user, and the lack of evaluation and selection criteria makes it difficult for end users to choose the most suitable technique. In this study, we combine logic-based argumentation with Interpretable Machine Learning, introducing a preliminary meta-explanation methodology that identifies the truthful parts of feature-importance-oriented interpretations. Beyond serving as a meta-explanation technique, this approach can be used as an evaluation or selection tool for multiple feature importance techniques. Experimentation strongly indicates that an ensemble of multiple interpretation techniques yields considerably more truthful explanations.
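The core idea of checking the "truthfulness" of a feature-importance interpretation can be illustrated with a toy sketch. This is not the paper's Altruist implementation; it is a minimal, hypothetical example assuming a simple consistency test: a feature's claimed importance is deemed truthful if perturbing that feature moves the model's prediction in the direction the importance's sign implies.

```python
# Illustrative sketch (NOT the paper's Altruist method): test whether each
# claimed feature importance agrees with the model's actual behaviour.

def predict(x):
    # Toy linear model standing in for any black-box predictor.
    w = [2.0, -1.0, 0.0]
    return sum(wi * xi for wi, xi in zip(w, x))

def truthful_features(x, importances, eps=0.5):
    """Return indices whose importance sign matches the observed effect
    of a small positive perturbation of that feature on the prediction."""
    base = predict(x)
    ok = []
    for i, imp in enumerate(importances):
        xp = list(x)
        xp[i] += eps
        delta = predict(xp) - base
        if imp == 0 and abs(delta) < 1e-9:
            ok.append(i)          # zero importance, no effect: consistent
        elif imp * delta > 0:
            ok.append(i)          # same sign: importance is truthful here
    return ok

x = [1.0, 1.0, 1.0]
# Two hypothetical interpretation techniques' importance vectors:
technique_a = [1.5, -0.8, 0.3]    # wrongly claims feature 2 matters
technique_b = [2.1, -1.2, 0.0]

print(truthful_features(x, technique_a))  # feature 2 flagged as untruthful
print(truthful_features(x, technique_b))  # all three claims are consistent
```

An ensemble in this spirit could then keep only the importance components that survive such checks across several techniques, which is the intuition behind the paper's finding that ensembles yield more truthful explanations.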

Authors (3)
  1. Ioannis Mollas (12 papers)
  2. Nick Bassiliades (14 papers)
  3. Grigorios Tsoumakas (50 papers)
Citations (12)
