Applying Attribution Explanations in Truth-Discovery Quantitative Bipolar Argumentation Frameworks

Published 9 Sep 2024 in cs.AI (arXiv:2409.05831v1)

Abstract: Explaining the strength of arguments under gradual semantics is receiving increasing attention. For example, various studies in the literature offer explanations by computing the attribution scores of arguments or edges in Quantitative Bipolar Argumentation Frameworks (QBAFs). These explanations, known as Argument Attribution Explanations (AAEs) and Relation Attribution Explanations (RAEs), commonly employ removal-based and Shapley-based techniques for computing the attribution scores. While AAEs and RAEs have proven useful in several applications with acyclic QBAFs, they remain largely unexplored for cyclic QBAFs. Furthermore, existing applications tend to focus solely on either AAEs or RAEs, but do not compare them directly. In this paper, we apply both AAEs and RAEs to Truth Discovery QBAFs (TD-QBAFs), which assess the trustworthiness of sources (e.g., websites) and their claims (e.g., the severity of a virus), and feature complex cycles. We find that both AAEs and RAEs can provide interesting explanations and can give non-trivial and surprising insights.

Summary

  • The paper pioneers the application of both argument-level and relation-level attribution explanations (AAEs and RAEs) to cyclic TD-QBAFs, identifying influential arguments and relations.
  • It employs removal-based and Shapley-based techniques to quantitatively assess the impact of individual arguments and their interactions.
  • Key findings demonstrate that both attribution methods reveal non-trivial and sometimes surprising argument influences, enhancing transparency in AI decision-making.

Application of Attribution Explanations in Cyclic Truth-Discovery QBAFs

The paper entitled "Applying Attribution Explanations in Truth-Discovery Quantitative Bipolar Argumentation Frameworks" addresses the challenge of making argument strength within Quantitative Bipolar Argumentation Frameworks (QBAFs) more interpretable through attribution explanations. Whereas prior work applied such explanations only to acyclic frameworks, this research brings them to cyclic Truth Discovery QBAFs (TD-QBAFs), broadening the understanding of how individual arguments and relations contribute to an outcome.

Background

Argumentation frameworks have been recognized as pivotal for enhancing explainability in AI, allowing for transparent reasoning in scenarios involving conflicting information. QBAFs, a notable extension of traditional argumentation frameworks, assign each argument a base score and evaluate its final dialectical strength under a gradual semantics that propagates attack and support relations. The focal point of this paper is TD-QBAFs, which are designed to assess the trustworthiness of information reported by diverse sources and are characterized by their cyclic nature: in truth discovery, trustworthy sources lend credibility to their claims while believable claims, in turn, raise the trustworthiness of their sources.
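
To make the setting concrete, below is a minimal sketch of a QBAF under one standard gradual semantics for cyclic graphs, the quadratic-energy (QE) semantics; the paper's exact semantics may differ, and all names here are illustrative. Because cycles rule out a topological evaluation order, strengths are computed by fixpoint iteration from the base scores.

    from dataclasses import dataclass, field

    @dataclass
    class QBAF:
        base_score: dict[str, float]  # tau: argument -> base score in [0, 1]
        attackers: dict[str, list[str]] = field(default_factory=dict)   # target -> attacking arguments
        supporters: dict[str, list[str]] = field(default_factory=dict)  # target -> supporting arguments

    def _h(x: float) -> float:
        # QE influence function: maps aggregated energy into [0, 1).
        z = max(x, 0.0)
        return z * z / (1.0 + z * z)

    def qe_strength(qbaf: QBAF, iters: int = 200) -> dict[str, float]:
        """Approximate QE strengths by fixpoint iteration.

        Cyclic QBAFs have no topological evaluation order, so strengths
        are iterated from the base scores until they (approximately)
        converge.
        """
        sigma = dict(qbaf.base_score)
        for _ in range(iters):
            new = {}
            for a, tau in qbaf.base_score.items():
                # Energy: supporters add strength, attackers subtract it.
                e = (sum(sigma[s] for s in qbaf.supporters.get(a, []))
                     - sum(sigma[x] for x in qbaf.attackers.get(a, [])))
                new[a] = tau - tau * _h(-e) + (1.0 - tau) * _h(e)
            sigma = new
        return sigma

    # Toy truth-discovery fragment: a source and its claim support each
    # other (forming a cycle), while a conflicting claim attacks the claim.
    td = QBAF(
        base_score={"source": 0.5, "claim": 0.5, "rival": 0.5},
        supporters={"source": ["claim"], "claim": ["source"]},
        attackers={"claim": ["rival"]},
    )
    print(qe_strength(td))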

Methodology

The paper explores two central forms of attribution explanations: Argument Attribution Explanations (AAEs) and Relation Attribution Explanations (RAEs), applying both to the cyclic QBAFs of truth discovery networks. It uses removal-based and Shapley-based techniques to compute the attribution scores within TD-QBAFs. The key distinction is that AAEs assign impact scores to individual arguments with respect to a topic argument, whereas RAEs attribute impact to the individual attack and support edges between arguments; the sketch after this paragraph makes the removal-based reading concrete.
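
The following is a hedged sketch of the removal-based variants, building on the QBAF code above and assuming one common sign convention from the attribution literature: an element's score with respect to a topic argument is the change in the topic's strength when that element is removed. The function names are illustrative, not the paper's.

    import copy

    def remove_argument(qbaf: QBAF, arg: str) -> QBAF:
        """Copy of the QBAF with `arg` and all its incident relations deleted."""
        q = copy.deepcopy(qbaf)
        del q.base_score[arg]
        for rel in (q.attackers, q.supporters):
            rel.pop(arg, None)                      # edges pointing at arg
            for tgt in rel:                         # edges coming from arg
                rel[tgt] = [s for s in rel[tgt] if s != arg]
        return q

    def remove_edge(qbaf: QBAF, src: str, tgt: str) -> QBAF:
        """Copy of the QBAF with the single edge src -> tgt deleted."""
        q = copy.deepcopy(qbaf)
        for rel in (q.attackers, q.supporters):
            if tgt in rel:
                rel[tgt] = [s for s in rel[tgt] if s != src]
        return q

    def removal_aae(qbaf: QBAF, topic: str, arg: str) -> float:
        # AAE of `arg`: the drop in the topic's strength when arg is removed.
        return qe_strength(qbaf)[topic] - qe_strength(remove_argument(qbaf, arg))[topic]

    def removal_rae(qbaf: QBAF, topic: str, src: str, tgt: str) -> float:
        # RAE of the edge src -> tgt: the drop in the topic's strength
        # when just that attack/support edge is removed.
        return qe_strength(qbaf)[topic] - qe_strength(remove_edge(qbaf, src, tgt))[topic]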

For the cyclic TD-QBAFs, both forms of attribution explanation are applied to quantify how much individual nodes and edges contribute to the strength of a topic argument. Because exact Shapley values require evaluating exponentially many subsets of arguments, the study employs a practical sampling-based approximation for Shapley-based scores in large networks.
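
One standard way to realise such an approximation, sketched below under the same toy definitions, is permutation sampling: average the target argument's marginal contribution over randomly drawn coalitions of the remaining arguments. The paper's exact estimator and sample sizes may differ.

    import random

    def shapley_aae(qbaf: QBAF, topic: str, target: str, samples: int = 500) -> float:
        """Monte-Carlo estimate of the Shapley-based AAE of `target` on `topic`.

        Each sample keeps a random coalition of the other arguments (a
        shuffled prefix, matching the predecessor distribution of a
        uniformly random permutation) and measures the target's marginal
        effect on the topic's strength.
        """
        others = [a for a in qbaf.base_score if a not in (topic, target)]
        total = 0.0
        for _ in range(samples):
            random.shuffle(others)
            cut = random.randrange(len(others) + 1)
            q = qbaf
            for absent in others[cut:]:     # arguments outside the coalition
                q = remove_argument(q, absent)
            with_target = qe_strength(q)[topic]
            without_target = qe_strength(remove_argument(q, target))[topic]
            total += with_target - without_target
        return total / samples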

Key Findings

The results demonstrate that both AAEs and RAEs can effectively identify the arguments and relations that most influence outcomes in TD-QBAFs. The study reveals detailed insights into argument interactions, surfacing non-trivial influences that are not apparent at first glance. Notably, removal-based and Shapley-based explanations provided broadly consistent yet distinct perspectives, suggesting the two techniques are complementary rather than interchangeable.

Implications and Future Directions

This comprehensive analysis of AAEs and RAEs in the context of TD-QBAFs sheds light on the utility of these methods for enhancing the interpretability of AI-driven decision-making processes. By elucidating the primary contributing factors in the network, this research enables practitioners to better understand complex argumentative interactions, thus fostering improved trust in AI systems. The study proposes a promising framework for further analysis, particularly recommending future exploration into varying QBAF semantics and their effects on attribution explanations.

The implications for practical applications are substantial, as these methods can potentially be adapted to other domains where transparency and explanation are paramount. In conclusion, the paper advances explainable AI through QBAFs in complex, cyclic networks, paving the way for further exploration into theoretically rich and practically impactful areas of AI interpretability.
