
Argumentative Agentic Models for CBR

Updated 21 December 2025
  • The paper presents AAM-CBR as a framework that integrates abstract argumentation with neural language model-based agentic modules for precise, explainable case-based reasoning.
  • Its methodology leverages modular coverage, dynamic factor extraction, and multi-agent dialogue protocols to construct dispute-tree style explanations.
  • Empirical evaluations demonstrate that AAM-CBR outperforms single-prompt baselines in high-factor scenarios, ensuring robust predictions in complex domains.

Argumentative Agentic Models for Case-Based Reasoning (AAM-CBR) are a class of frameworks for interpretable, precise, and explainable classification and prediction that integrate abstract argumentation semantics, multi-agent principles, and, increasingly, neural LLMs to operationalize case-based reasoning even when the structure and factorization of precedent cases are unobserved. These systems leverage modular agentic architectures, where each agent embodies a case (or case subset) and mediates its contribution to case retrieval, factor extraction, and attack construction in an argumentation network. The resulting dynamics support robust prediction and rich, dispute-tree-style explanations in domains marked by complex, evolving, and even inconsistent legal or factual precedents (Fungwacharakorn et al., 14 Dec 2025, Paulino-Passos et al., 2023, Fungwacharakorn et al., 22 Oct 2025).

1. Formal Foundations: Abstract Argumentation and Case-Based Reasoning

AAM-CBR directly builds on Abstract Argumentation for Case-Based Reasoning (AA-CBR), itself rooted in Dung-style argumentation frameworks. Let $\mathcal{F}$ denote a finite set of “factors,” and $2^{\mathcal{F}}$ the corresponding set of situations. Cases are pairs $(X, o_X) \in 2^{\mathcal{F}} \times \mathcal{O}$ with binary outcomes $o_X \in \{0, 1\}$. The case base $\Gamma$ is an outcome-consistent set of such pairs (Fungwacharakorn et al., 14 Dec 2025).

AA-CBR constructs an argumentation framework $(\mathcal{A}, \rightsquigarrow)$, with arguments $\mathcal{A} = \Gamma \cup \{(N, ?)\} \cup \{(\varnothing, o_d)\}$ for a new query $N \subseteq \mathcal{F}$ and designated default outcome $o_d$. Attack relations are defined by outcome disagreement and strict factor-set inclusion. The grounded extension $G$ of $(\mathcal{A}, \rightsquigarrow)$ determines the system’s prediction; specifically, $(\varnothing, o_d) \in G$ implies prediction $o_d$; otherwise, $\overline{o}_d$ is returned (Fungwacharakorn et al., 14 Dec 2025, Paulino-Passos et al., 2021).
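These semantics can be sketched compactly in Python. The following is a minimal illustration, not the authors' implementation: it assumes the standard AA-CBR attack conditions (outcome disagreement, strict factor-set inclusion, and concision, i.e., no intermediate attacker with the same outcome), plus the usual irrelevance attack from the query argument $(N, ?)$ on cases whose factors are not contained in $N$.

```python
def grounded_extension(args, attacks):
    """Least fixed point of the characteristic function: iterate
    'accept every argument all of whose attackers are counter-attacked'."""
    attackers = {a: {b for (b, t) in attacks if t == a} for a in args}
    G = set()
    while True:
        new = {a for a in args
               if all(any((g, b) in attacks for g in G) for b in attackers[a])}
        if new == G:
            return G
        G = new

def aa_cbr_predict(case_base, query, default_outcome):
    """case_base: iterable of (factor_set, outcome in {0, 1}); query: factor set."""
    default = (frozenset(), default_outcome)
    cases = [(frozenset(X), o) for X, o in case_base] + [default]
    newarg = ("query", None)                       # plays the role of (N, ?)
    N = frozenset(query)
    attacks = set()
    for X, oX in cases:
        for Y, oY in cases:
            # (Y, oY) attacks (X, oX): disagreement, strict inclusion, concision
            if oY != oX and X < Y and not any(
                    oZ == oY and X < Z < Y for Z, oZ in cases):
                attacks.add(((Y, oY), (X, oX)))
    for X, oX in cases:
        if not X <= N:                             # irrelevant to the query
            attacks.add((newarg, (X, oX)))
    G = grounded_extension(cases + [newarg], attacks)
    return default_outcome if default in G else 1 - default_outcome
```

For instance, with case base $\{(\{a\}, 1)\}$ and default outcome $0$, the query $\{a\}$ is predicted $1$ (the past case defeats the default), while the empty query falls back to the default $0$.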

AAM-CBR generalizes this formalism by embedding agentic modules that determine factor coverage and extraction dynamically, even when the underlying case base consists of unstructured textual descriptions rather than explicit factor sets.

2. Agentic Extension: Modular Coverage and Factor Extraction

Unlike classical AA-CBR, which assumes an explicitly factorized case base, AAM-CBR operates on unprocessed textual cases $D = \{d_1, \ldots, d_k\}$, each annotated with an outcome $o_i$ but lacking explicit factor-set annotation (Fungwacharakorn et al., 14 Dec 2025). Each previous case $d_i$ is equipped with two specialized modules (typically instantiated as LLM prompts):

  • $coverage_i(N, d_i): 2^{\mathcal{F}} \times \mathrm{Text} \rightarrow \{\mathrm{YES}, \mathrm{NO}\}$
  • $extract_i(N, d_i): 2^{\mathcal{F}} \times \mathrm{Text} \rightarrow 2^{\mathcal{F}}$

For a new, factorized query $N \subseteq \mathcal{F}$, agent $i$ determines whether $d_i$ is relevant (coverage) and, if so, the precise subset $F_i \subseteq N$ of query factors it covers (extraction). The agent returns $F_i$ if relevant, or refuses to contribute otherwise. The effective, factorized case base for the query becomes $\Gamma' = \{(F_i, o_i) \mid coverage_i(N, d_i) = \mathrm{YES}\}$, and the AA-CBR algorithm proceeds as before.

This meta-argumentation approach couples symbolic semantics with neural coverage/extraction, enabling privacy—irrelevant cases reveal no information—and extensibility, as new agents may handle new cases without global pre-processing (Fungwacharakorn et al., 14 Dec 2025).
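A coverage/extraction agent pair can be prototyped as thin wrappers over any prompt-to-text callable. The prompt wording and the `toy_llm` stand-in below are illustrative assumptions, not the prompts used in the paper:

```python
import ast

def make_case_agent(case_text, outcome, llm):
    """Wrap one unstructured past case d_i as an agent exposing the
    coverage_i and extract_i modules; `llm` is any prompt -> text callable."""
    def coverage(query_factors):
        prompt = (f"Case: {case_text}\nFactors: {sorted(query_factors)}\n"
                  "Does the case mention any of these factors? Answer YES or NO.")
        return llm(prompt).strip().upper() == "YES"
    def extract(query_factors):
        prompt = (f"Case: {case_text}\nFactors: {sorted(query_factors)}\n"
                  "List the factors present in the case, comma-separated.")
        answer = {s.strip() for s in llm(prompt).split(",")}
        return answer & set(query_factors)         # clamp output to F_i ⊆ N
    return coverage, extract, outcome

def toy_llm(prompt):
    """Deterministic stand-in for a real model: a factor counts as covered
    if its name literally appears in the case text."""
    lines = prompt.splitlines()
    case = lines[0].removeprefix("Case: ")
    factors = ast.literal_eval(lines[1].removeprefix("Factors: "))
    hits = [f for f in factors if f in case]
    if "YES or NO" in prompt:
        return "YES" if hits else "NO"
    return ", ".join(hits)
```

An agent wrapping the case "borrower made a late payment" then answers coverage YES for the query {late payment, collateral} and extracts only the late-payment factor; swapping `toy_llm` for a real model client changes nothing in the surrounding pipeline.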

3. Multi-Agent and Dialogue Protocols

Extending single-agent AAM-CBR systems to multi-agent settings introduces inter-agent dialogue and consensus mechanics (Paulino-Passos et al., 2023). In such setups, each agent $A_i$ maintains a local case base $C_i$ and relevance model $R_i$. The dialogic protocol comprises:

  • Retrieval: Each agent proposes relevant candidate cases via $R_i$.
  • Attack Declaration: Agents broadcast attack relations among candidates, according to their local criteria.
  • Defense & Rebuttal: Opponents may contest attacks by presenting more specific cases or challenging relevance.
  • Aggregation: The system aggregates local argumentation frameworks into a global AF and computes a joint grounded extension for collective outcome determination.

This multi-agent design admits both consensus models (all agents agree) and dispute resolution strategies (e.g., weighted trust, meta-argumentation over relevance models). Local relevance functions may be adapted online: an agent whose attack is regularly overruled may update its local model accordingly (Paulino-Passos et al., 2023).
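One way to make the weighted-trust dispute-resolution idea concrete is the following hypothetical rule, sketched here purely for illustration: a declared attack survives aggregation only when the trust mass of the agents asserting it strictly outweighs that of the agents contesting it.

```python
def resolve_attacks(proposed, contested, trust):
    """proposed / contested: attack -> set of agent ids; trust: agent -> weight.
    Keep an attack iff its supporting trust strictly exceeds the opposition."""
    kept = set()
    for attack, proposers in proposed.items():
        support = sum(trust[a] for a in proposers)
        opposition = sum(trust[a] for a in contested.get(attack, ()))
        if support > opposition:
            kept.add(attack)
    return kept
```

The surviving attacks, together with the union of the agents' candidate cases, would form the global argumentation framework on which the joint grounded extension is computed.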

4. Symbolic–Neural Integration and Algorithmic Structure

AAM-CBR is notable for its hybridization of symbolic and neural reasoning components:

  • The symbolic backbone is provided by Dung-style abstraction: grounded extension algorithms govern the ultimate outcome based on attacks constructed from agentic inputs.
  • Neural LLMs function as “subagents” for previous cases, invoked only for coverage and extraction during new-case evaluation (Fungwacharakorn et al., 14 Dec 2025).

The full computational pipeline, in the agentic textual scenario, is:

for i in 1..k:
    if coverage_i(N, d_i) == YES:
        F_i = extract_i(N, d_i)
        Γ' = Γ' ∪ { (F_i, o_i) }
Construct AA-framework (𝒜, ↝) for Γ' ∪ { (N, ?) } ∪ { (∅, o_d) }
G = grounded_extension(𝒜, ↝)
return outcome: (∅, o_d) ∈ G ? o_d : ō_d
(Fungwacharakorn et al., 14 Dec 2025)

This structure ensures modularity and privacy: only relevant cases participate, and the symbolic AA layer adjudicates the global result.

5. Empirical Evaluation and Practical Performance

Empirical studies on synthetic credit-evaluation datasets reveal a distinctive regime transition in AAM-CBR’s relative performance. For smaller factor sets ($n \leq 7$), prompt-based direct LLM inference can outpace AAM-CBR, presumably because error accumulation in case coverage/extraction is limited. When new queries are rich in factors ($n \geq 8$), AAM-CBR’s explicit semantics and modular decomposition yield superior accuracy: for $n = 10$, AAM-CBR achieves $1.00$ accuracy (default $= 0$) and $0.96$ (default $= 1$) on Gemini-Lite, while single-prompt baselines plateau at $\leq 0.9$ (Fungwacharakorn et al., 14 Dec 2025).

Case coverage and extraction accuracy by LLM agents improves with larger $n$, reaching $> 96\%$ for extraction when $n = 10$. Both Gemini-Lite and GPT-4o exhibit this trend, with GPT-4o slightly outperforming on extraction. This phase transition substantiates the value of structured argumentation in factor-rich, complex domains.

6. Robustness, Monotonicity, and Conflict Management

Original AA-CBR and its extensions can exhibit failures of cautious monotonicity: the addition of new cases (even those entailed by prior labels) may alter predictions non-monotonically, as shown via counterexamples (Paulino-Passos et al., 2021). This is addressed in the cautiously monotonic variant, cAA-CBR, which restricts attention to the unique concise subset of “surprising and sufficient” cases, ensuring closure under cautious monotonicity, cumulativity, and rational monotonicity.

When extending AAM-CBR to generalized reason models incorporating inconsistent precedents, the derivation state argumentation (DSA) framework constructs argumentation graphs where derivation states are tracked over all partial fact subsets. Attack relations are determined by state changes on strict subset relations, yielding unique grounded extensions and supporting fine-grained, dispute-tree explanations even in the presence of conflict and inconsistency (Fungwacharakorn et al., 22 Oct 2025).

7. Limitations and Future Directions

AAM-CBR frameworks show limitations on sparse-fact queries, where LLM-driven coverage/extraction noise is magnified. Cost scales linearly with the number of previous cases due to agent invocation, motivating schemes for agent sharing, retrieval-augmented compression, or symbolic knowledge-graph integration. Automatic discovery of unanticipated “new” factors in the case base remains unsolved. Future research directions include agent feedback loops for iterative learning, hierarchical AA frameworks for very large case bases, and tighter coupling with symbolic ontologies to boost extraction reliability (Fungwacharakorn et al., 14 Dec 2025).

