Argumentative Agentic Models for CBR
- The paper presents AAM-CBR as a framework that integrates abstract argumentation with neural language model-based agentic modules for precise, explainable case-based reasoning.
- Its methodology leverages modular coverage, dynamic factor extraction, and multi-agent dialogue protocols to construct dispute-tree style explanations.
- Empirical evaluations demonstrate that AAM-CBR outperforms single-prompt baselines in high-factor scenarios, ensuring robust predictions in complex domains.
Argumentative Agentic Models for Case-Based Reasoning (AAM-CBR) are a class of frameworks for interpretable, precise, and explainable classification and prediction that integrate abstract argumentation semantics, multi-agent principles, and, increasingly, neural LLMs to operationalize case-based reasoning even when the structure and factorization of precedent cases are unobserved. These systems leverage modular agentic architectures, where each agent embodies a case (or case subset) and mediates its contribution to case retrieval, factor extraction, and attack construction in an argumentation network. The resulting dynamics support robust prediction and rich, dispute-tree-style explanations in domains marked by complex, evolving, and even inconsistent legal or factual precedents (Fungwacharakorn et al., 14 Dec 2025, Paulino-Passos et al., 2023, Fungwacharakorn et al., 22 Oct 2025).
1. Formal Foundations: Abstract Argumentation and Case-Based Reasoning
AAM-CBR directly builds on Abstract Argumentation for Case-Based Reasoning (AA-CBR), itself rooted in Dung-style argumentation frameworks. Let 𝐹 denote a finite set of “factors,” with 2^𝐹 the corresponding set of situations. Cases are pairs (S, o), where S ⊆ 𝐹 is a situation and o ∈ {o_d, ō_d} is a binary outcome. The case base Γ is an outcome-consistent set of such pairs (Fungwacharakorn et al., 14 Dec 2025).
AA-CBR constructs an argumentation framework (𝒜, ↝), whose arguments are the cases in Γ together with the new query (N, ?) and a designated default argument (∅, o_d) for the default outcome o_d. Attack relations are defined by outcome disagreement and strict factor-set inclusion: a more specific case attacks a less specific one with the opposite outcome, provided no intermediate case of the attacker’s outcome lies strictly between them. The grounded extension G of (𝒜, ↝) determines the system’s prediction; specifically, (∅, o_d) ∈ G implies prediction o_d; otherwise, ō_d is returned (Fungwacharakorn et al., 14 Dec 2025, Paulino-Passos et al., 2021).
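As a minimal sketch of the construction above (assuming outcomes are encoded as strings and factor sets as Python `frozenset`s; all identifiers are illustrative, not from the paper), the attack relation and grounded-extension prediction can be written as:

```python
def grounded_extension(args, attacks):
    """Least fixpoint of F(S) = {a in args | S defends a}."""
    attackers = {a: {b for (b, t) in attacks if t == a} for a in args}
    G = set()
    while True:
        # a is defended by G if every attacker of a is counter-attacked from G
        defended = {a for a in args
                    if all(any((g, b) in attacks for g in G)
                           for b in attackers[a])}
        if defended == G:
            return G
        G = defended

def aa_cbr_predict(case_base, query, default_outcome, other_outcome):
    """case_base: iterable of (frozenset_of_factors, outcome), outcome-consistent."""
    query = frozenset(query)
    default = (frozenset(), default_outcome)
    new = (query, "?")
    cases = list(case_base) + [default]
    attacks = set()
    for (X, oX) in cases:
        for (Y, oY) in cases:
            # attack: outcome disagreement + strict inclusion + no intermediate case
            if oX != oY and Y < X and not any(
                    oZ == oX and Y < Z < X for (Z, oZ) in cases):
                attacks.add(((X, oX), (Y, oY)))
        if not X <= query:  # irrelevance attack by the new query
            attacks.add((new, (X, oX)))
    G = grounded_extension(cases + [new], attacks)
    return default_outcome if default in G else other_outcome
```

For instance, with a single precedent ({a}, deny) and default outcome "grant", a query {a, b} is predicted "deny", while an irrelevant query {c} falls back to the default.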
AAM-CBR generalizes this formalism by embedding agentic modules that determine factor coverage and extraction dynamically, even when the underlying case base consists of unstructured textual descriptions rather than explicit factor sets.
2. Agentic Extension: Modular Coverage and Factor Extraction
Unlike classical AA-CBR, which assumes an explicitly factorized case base, AAM-CBR operates on unprocessed textual cases d_1, …, d_k, each annotated with an outcome o_i but lacking explicit factor-set annotation (Fungwacharakorn et al., 14 Dec 2025). Each previous case is equipped with two specialized modules (typically instantiated as LLM prompts): a coverage module, which decides whether the case bears on a given query, and an extraction module, which returns the factors the case shares with that query.
For a new, factorized query N, agent i determines whether its case d_i is relevant (coverage) and, if so, the precise intersection of factors F_i ⊆ N it covers (extraction). The agent returns (F_i, o_i) if relevant, or refuses to contribute otherwise. The effective, factorized case base for the query becomes Γ′ = { (F_i, o_i) : coverage_i(N, d_i) = YES }, and the AA-CBR algorithm proceeds as before.
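The coverage-then-extraction step can be sketched as a filter over the textual case base. Here `coverage` and `extract` stand in for the per-case LLM-prompt modules; in this toy sketch they are plain callables (simple keyword tests in the usage below), and all names are illustrative assumptions, not the paper's API:

```python
def build_effective_case_base(query_factors, textual_cases, coverage, extract):
    """textual_cases: list of (description, outcome) pairs.
    Returns the factorized case base Γ′ contributed by relevant agents only."""
    gamma = []
    for desc, outcome in textual_cases:
        if coverage(query_factors, desc):  # agent opts in or refuses silently
            gamma.append((frozenset(extract(query_factors, desc)), outcome))
    return gamma
```

Note the privacy property falls out directly: a case whose agent answers "not relevant" contributes nothing to Γ′ and reveals no factors.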
This meta-argumentation approach couples symbolic semantics with neural coverage/extraction, enabling privacy (irrelevant cases reveal no information) and extensibility, as new agents may handle new cases without global pre-processing (Fungwacharakorn et al., 14 Dec 2025).
3. Multi-Agent and Dialogue Protocols
Extending single-agent AAM-CBR systems to multi-agent settings introduces inter-agent dialogue and consensus mechanics (Paulino-Passos et al., 2023). In such setups, each agent maintains a local case base and a local relevance model. The dialogic protocol comprises:
- Retrieval: Each agent proposes relevant candidate cases via its local relevance model.
- Attack Declaration: Agents broadcast attack relations among candidates, according to their local criteria.
- Defense & Rebuttal: Opponents may contest attacks by presenting more specific cases or challenging relevance.
- Aggregation: The system aggregates local argumentation frameworks into a global AF and computes a joint grounded extension for collective outcome determination.
This multi-agent design admits both consensus models (all agents agree) and dispute resolution strategies (e.g., weighted trust, meta-argumentation over relevance models). Local relevance functions may be adapted online: an agent whose attack is regularly overruled may update its local model accordingly (Paulino-Passos et al., 2023).
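Under the consensus model, the aggregation step amounts to taking the union of the agents' local frameworks and evaluating the result jointly. A minimal sketch (arguments as opaque hashable labels; `aggregate` and `grounded` are illustrative names, not from the papers):

```python
def grounded(args, attacks):
    """Grounded extension as the least fixpoint of the defense operator."""
    attackers = {a: {b for (b, t) in attacks if t == a} for a in args}
    G = set()
    while True:
        nxt = {a for a in args
               if all(any((g, b) in attacks for g in G) for b in attackers[a])}
        if nxt == G:
            return G
        G = nxt

def aggregate(local_afs):
    """local_afs: list of (arguments, attacks) pairs, one per agent.
    Unions them into a global AF and returns the joint grounded extension."""
    args, attacks = set(), set()
    for (a_j, r_j) in local_afs:
        args |= set(a_j)
        attacks |= set(r_j)
    return grounded(args, attacks)
```

The sketch also illustrates why aggregation matters: an argument accepted in one agent's local framework may be overruled globally once another agent contributes an attacker, which is exactly the situation that triggers local-model updates in the adaptive variant.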
4. Symbolic–Neural Integration and Algorithmic Structure
AAM-CBR is notable for its hybridization of symbolic and neural reasoning components:
- The symbolic backbone is provided by Dung-style abstraction: grounded extension algorithms govern the ultimate outcome based on attacks constructed from agentic inputs.
- Neural LLMs function as “subagents” for previous cases, invoked only for coverage and extraction during new-case evaluation (Fungwacharakorn et al., 14 Dec 2025).
The full computational pipeline, in the agentic textual scenario, is:
```
Γ′ = ∅
for i in 1..k:
    if coverage_i(N, d_i) == YES:
        F_i = extract_i(N, d_i)
        Γ′ = Γ′ ∪ { (F_i, o_i) }
Construct AA-framework (𝒜, ↝) for Γ′ ∪ { (N, ?) } ∪ { (∅, o_d) }
G = grounded_extension(𝒜, ↝)
return (∅, o_d) ∈ G ? o_d : ō_d
```
This structure ensures modularity and privacy: only relevant cases participate, and the symbolic AA layer adjudicates the global result.
5. Empirical Evaluation and Practical Performance
Empirical studies on synthetic credit-evaluation datasets reveal a distinctive regime transition in AAM-CBR’s relative performance. For queries with small factor sets, prompt-based direct LLM inference can outpace AAM-CBR, presumably because error accumulation in case coverage/extraction is limited. When new queries are rich in factors, AAM-CBR’s explicit semantics and modular decomposition yield superior accuracy: in the largest factor-count setting, AAM-CBR achieves 1.00 accuracy (default o_d) and 0.96 (default ō_d) on Gemini-Lite, while single-prompt baselines plateau at markedly lower accuracy (Fungwacharakorn et al., 14 Dec 2025).
Case coverage and extraction accuracy by LLM agents improve as the number of query factors grows. Both Gemini-Lite and GPT-4o exhibit this trend, with GPT-4o slightly outperforming on extraction. This phase transition substantiates the value of structured argumentation in factor-rich, complex domains.
6. Robustness, Monotonicity, and Conflict Management
Original AA-CBR and its extensions can exhibit failures of cautious monotonicity: the addition of new cases (even those entailed by prior labels) may alter predictions non-monotonically, as shown via counterexamples (Paulino-Passos et al., 2021). This is addressed in the cautiously monotonic variant, cAA-CBR, which restricts attention to the unique concise subset of “surprising and sufficient” cases, ensuring closure under cautious monotonicity, cumulativity, and rational monotonicity.
When extending AAM-CBR to generalized reason models incorporating inconsistent precedents, the derivation state argumentation (DSA) framework constructs argumentation graphs where derivation states are tracked over all partial fact subsets. Attack relations are determined by state changes on strict subset relations, yielding unique grounded extensions and supporting fine-grained, dispute-tree explanations even in the presence of conflict and inconsistency (Fungwacharakorn et al., 22 Oct 2025).
7. Limitations and Future Directions
AAM-CBR frameworks show limitations on sparse-fact queries, where LLM-driven coverage/extraction noise is magnified. Cost scales linearly with the number of previous cases due to agent invocation, motivating schemes for agent sharing, retrieval-augmented compression, or symbolic knowledge-graph integration. Automatic discovery of unanticipated “new” factors in the case base remains unsolved. Future research directions include agent feedback loops for iterative learning, hierarchical AA frameworks for very large case bases, and tighter coupling with symbolic ontologies to boost extraction reliability (Fungwacharakorn et al., 14 Dec 2025).
References
- "Argumentative Reasoning with LLMs on Non-factorized Case Bases" (Fungwacharakorn et al., 14 Dec 2025)
- "Technical Report on the Learning of Case Relevance in Case-Based Reasoning with Abstract Argumentation" (Paulino-Passos et al., 2023)
- "Monotonicity and Noise-Tolerance in Case-Based Reasoning with Abstract Argumentation (with Appendix)" (Paulino-Passos et al., 2021)
- "An Argumentative Explanation Framework for Generalized Reason Model with Inconsistent Precedents" (Fungwacharakorn et al., 22 Oct 2025)