
Argumentative inference in uncertain and inconsistent knowledge bases (1303.1503v1)

Published 6 Mar 2013 in cs.AI

Abstract: This paper presents and discusses several methods for reasoning from inconsistent knowledge bases. A so-called argumentative-consequence relation taking into account the existence of consistent arguments in favor of a conclusion and the absence of consistent arguments in favor of its contrary, is particularly investigated. Flat knowledge bases, i.e. without any priority between their elements, as well as prioritized ones where some elements are considered as more strongly entrenched than others are studied under different consequence relations. Lastly a paraconsistent-like treatment of prioritized knowledge bases is proposed, where both the level of entrenchment and the level of paraconsistency attached to a formula are propagated. The priority levels are handled in the framework of possibility theory.

Citations (187)

Summary

Argumentative Inference in Uncertain and Inconsistent Knowledge Bases

In the field of artificial intelligence, managing knowledge bases that contain inconsistencies and uncertainties is a critical challenge. The paper by Salem Benferhat, Didier Dubois, and Henri Prade titled "Argumentative Inference in Uncertain and Inconsistent Knowledge Bases" explores several methodologies for reasoning within such knowledge repositories. Its central contribution is an argumentative consequence relation, which accepts a conclusion only when there is a consistent argument supporting it and no consistent argument of comparable strength supporting its contrary. This relation is studied in both flat and prioritized knowledge bases, with entrenchment levels handled in the framework of possibility theory.

Management of Inconsistency and Argumentative Consequence

Traditional methods for dealing with inconsistency revise the knowledge base to restore consistency, which can discard valuable information. Coping strategies, by contrast, leave the inconsistency in place and extract useful conclusions despite it. The paper develops one such coping strategy, argumentative inference: a conclusion can be safely drawn from an inconsistent knowledge base only if some consistent subset of the base entails it and no consistent subset entails its negation.
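The definition above can be sketched directly. The encoding below (formulas as predicates over truth assignments, consistency checked by brute-force truth-table enumeration) is our own illustration, not the paper's machinery; it is only practical for tiny bases.

```python
from itertools import chain, combinations, product

ATOMS = ["p", "q"]

def consistent(formulas):
    """True if some truth assignment satisfies every formula."""
    return any(all(f(dict(zip(ATOMS, vals))) for f in formulas)
               for vals in product([True, False], repeat=len(ATOMS)))

def entails(formulas, goal):
    # formulas |= goal  iff  formulas + {not goal} is unsatisfiable
    return not consistent(list(formulas) + [lambda v: not goal(v)])

def has_argument(kb, goal):
    """Some consistent subset of kb classically entails goal."""
    subsets = chain.from_iterable(
        combinations(kb, r) for r in range(1, len(kb) + 1))
    return any(consistent(s) and entails(s, goal) for s in subsets)

def argumentative_consequence(kb, goal):
    """Accept goal iff there is a consistent argument for it and
    no consistent argument for its negation."""
    return has_argument(kb, goal) and not has_argument(kb, lambda v: not goal(v))

# Inconsistent base {p, p -> q, not p}: p is contested, but q is not.
kb = [lambda v: v["p"],
      lambda v: (not v["p"]) or v["q"],
      lambda v: not v["p"]]

print(argumentative_consequence(kb, lambda v: v["q"]))  # True: q is argued, not-q is not
print(argumentative_consequence(kb, lambda v: v["p"]))  # False: p and not-p both argued
```

Note how the relation is non-trivial on an inconsistent base: q is accepted because the argument {p, p -> q} has no consistent counterargument, while p is rejected because {not p} argues against it.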

Comparative Analysis of Consequence Relations

The paper performs a comprehensive comparative analysis of several inconsistency-tolerant consequence relations. Among these:

  • Free-Consequence is the most conservative, reasoning only from formulas that are involved in no minimal inconsistent subset.
  • MC-Consequence accepts a conclusion only if it is entailed by every maximal consistent sub-base; Lex-Consequence refines this by restricting attention to the sub-bases preferred under a lexicographic, cardinality-based ordering.
  • Existential Consequence is the most permissive, deriving a conclusion from any single consistent sub-base, at the risk of sanctioning both a proposition and its negation.
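The contrast among these relations can be made concrete on the toy base {p, not p, q}. The brute-force encoding below is an illustration only; the paper works with general propositional bases.

```python
from itertools import combinations, product

ATOMS = ["p", "q"]

def consistent(fs):
    return any(all(f(dict(zip(ATOMS, vs))) for f in fs)
               for vs in product([True, False], repeat=len(ATOMS)))

def entails(fs, goal):
    return not consistent(list(fs) + [lambda v: not goal(v)])

def subsets(n):
    for r in range(n + 1):
        yield from combinations(range(n), r)

def is_minimal_inconsistent(kb, s):
    return (not consistent([kb[i] for i in s])
            and all(consistent([kb[i] for i in t])
                    for t in combinations(s, len(s) - 1)))

def free_consequence(kb, goal):
    # Reason only from formulas involved in no minimal inconsistent subset.
    involved = {i for s in subsets(len(kb))
                if s and is_minimal_inconsistent(kb, s) for i in s}
    return entails([kb[i] for i in range(len(kb)) if i not in involved], goal)

def mc_consequence(kb, goal):
    # Accept goal iff every maximal consistent sub-base entails it.
    cons = [s for s in subsets(len(kb)) if consistent([kb[i] for i in s])]
    maximal = [s for s in cons if not any(set(s) < set(t) for t in cons)]
    return all(entails([kb[i] for i in s], goal) for s in maximal)

def existential_consequence(kb, goal):
    # Accept goal iff some nonempty consistent sub-base entails it.
    return any(entails([kb[i] for i in s], goal)
               for s in subsets(len(kb)) if s and consistent([kb[i] for i in s]))

kb = [lambda v: v["p"], lambda v: not v["p"], lambda v: v["q"]]
p, notp, q = kb[0], kb[1], kb[2]

print(free_consequence(kb, q), free_consequence(kb, p))  # True False
print(mc_consequence(kb, q))                             # True
print(existential_consequence(kb, p),
      existential_consequence(kb, notp))                 # True True  (unsafe!)
```

The last line shows why existential consequence is risky: both p and not-p are derivable, each from a different consistent sub-base.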

The authors argue that the argumentative consequence avoids outright contradictions and is theoretically akin to paraconsistent logics, which reject the "ex falso quodlibet" rule (from a contradiction, anything follows).

Extension to Prioritized Knowledge Bases

For prioritized knowledge bases, where elements possess varying reliability levels, the paper proposes an advanced treatment by integrating levels of certainty into argumentative inference. This involves assessing arguments for propositions and their negations across different priority layers, ensuring conclusions are drawn from consistently reliable information.
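A minimal sketch of possibilistic inference from such a prioritized base follows: each formula carries a necessity degree, and conclusions are drawn from the formulas strictly above the base's inconsistency level. The specific degrees below are illustrative, not taken from the paper.

```python
from itertools import product

ATOMS = ["p", "q"]

def consistent(fs):
    return any(all(f(dict(zip(ATOMS, vs))) for f in fs)
               for vs in product([True, False], repeat=len(ATOMS)))

def entails(fs, goal):
    return not consistent(list(fs) + [lambda v: not goal(v)])

def inconsistency_degree(kb):
    """Largest level a whose cut {formulas with degree >= a} is inconsistent."""
    for a in sorted({d for _, d in kb}, reverse=True):
        if not consistent([f for f, d in kb if d >= a]):
            return a
    return 0.0

def possibilistic_consequence(kb, goal):
    # Only the strata strictly above the inconsistency level are trusted.
    inc = inconsistency_degree(kb)
    return entails([f for f, d in kb if d > inc], goal)

# not-p is better entrenched (0.8) than p (0.5); p -> q sits in between (0.7).
kb = [(lambda v: not v["p"], 0.8),
      (lambda v: v["p"], 0.5),
      (lambda v: (not v["p"]) or v["q"], 0.7)]

print(inconsistency_degree(kb))                              # 0.5
print(possibilistic_consequence(kb, lambda v: not v["p"]))   # True
print(possibilistic_consequence(kb, lambda v: v["q"]))       # False: q relied on p
```

Because inconsistency first appears at level 0.5 (when p enters), only the formulas above that level are used, so the better-entrenched not-p survives while p, and anything resting on it, is discarded.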

Paraconsistent-Like Reasoning

Paraconsistent reasoning is extended by attaching a pair of weights to each proposition, reflecting both the degree to which it is supported and the degree to which its negation is supported. This dual-weight view enriches the description of knowledge states and localizes inconsistency to the formulas actually in conflict, rather than imposing a homogeneous assessment across the whole base.
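The dual-weight idea can be sketched as follows: the strength of an argument is taken here as the weakest certainty degree it uses, and the best argument wins. This min/max reading is our simplification in the spirit of possibility theory; the paper's propagation rules are richer.

```python
from itertools import chain, combinations, product

ATOMS = ["p", "q"]

def consistent(fs):
    return any(all(f(dict(zip(ATOMS, vs))) for f in fs)
               for vs in product([True, False], repeat=len(ATOMS)))

def entails(fs, goal):
    return not consistent(list(fs) + [lambda v: not goal(v)])

def support(kb, goal):
    """Strength of the best consistent argument for goal (0.0 if none):
    max over consistent entailing subsets of the min degree used."""
    best = 0.0
    idx = range(len(kb))
    for s in chain.from_iterable(
            combinations(idx, r) for r in range(1, len(kb) + 1)):
        fs = [kb[i][0] for i in s]
        if consistent(fs) and entails(fs, goal):
            best = max(best, min(kb[i][1] for i in s))
    return best

def dual_weight(kb, goal):
    # Pair (support for goal, support against goal).
    return support(kb, goal), support(kb, lambda v: not goal(v))

kb = [(lambda v: v["p"], 0.5),
      (lambda v: not v["p"], 0.8),
      (lambda v: (not v["p"]) or v["q"], 0.7)]

print(dual_weight(kb, lambda v: v["p"]))  # (0.5, 0.8): p is locally inconsistent
print(dual_weight(kb, lambda v: v["q"]))  # (0.5, 0.0): q supported, unopposed
```

The pairs make the locality explicit: p carries nonzero support on both sides, while q is untouched by the conflict.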

Practical and Theoretical Implications

The methodologies presented have significant implications for both theoretical advancement and practical application in AI. They offer a pathway to preserving the operational utility of knowledge bases despite inherent contradictions, while prioritizing information effectively. Future work could apply these inference modes within more complex, real-world AI systems, particularly in areas requiring nuanced decision-making and reasoning under uncertainty, and could evaluate them on default reasoning problems to further probe their flexibility and robustness.

In conclusion, the investigation led by Benferhat, Dubois, and Prade presents a robust framework that balances conservative reasoning within information systems with advanced methodologies that capitalize on argument structures under uncertainty. As AI continues to grapple with increasingly complex systems and datasets, such strategies will be pivotal in ensuring reliable, consistent, and meaningful inferences in software agents and decision-support systems.
