Taxonomy of Explanations
- Taxonomy of Explanations is a structured system that categorizes explanation types, mechanisms, and evaluation criteria in AI, software, and scientific domains.
- It organizes explanations along cognitive, mathematical, and contextual dimensions to align technical methods with user needs.
- The framework informs rigorous evaluation and design by mapping explanation methods to specific stages in model pipelines and stakeholder contexts.
A taxonomy of explanations is a structured classification system that organizes the types, purposes, mechanisms, and evaluation criteria for explanations, particularly in complex AI, software, and scientific systems. Such taxonomies enable rigorous comparison, principled development, and targeted deployment of explanation methods by aligning them with cognitive, mathematical, contextual, and operational dimensions. The goal is to clarify ambiguities around the term “explanation,” bridge gaps across user needs and technical properties, and inform the design and evaluation of explainable systems at multiple abstraction levels.
1. Dimensions and Axes for Taxonomizing Explanations
Taxonomies in the literature are predominantly multidimensional, relying on orthogonal axes that reflect both technical underpinnings and stakeholder contexts. Principal axes include:
- Cognitive Depth (derived from Bloom’s taxonomy): Ranges from basic recall (“Remember”) through understanding, application, analysis, evaluation, and creation. This progression supports step-wise adaptation of explanations to user knowledge in explainable AI, particularly for counterfactual explanations (Suffian et al., 2022).
- Mathematical Structure: Classifies explanations by their representation in feature space, distinguishing decision traces (original basis), decision approximations (projected subspace), and decision interpretations (mapped coordinates) for local explanations (Nomm, 2023).
- Pipeline Phase: For probabilistic models such as Bayesian networks, explanation types emerge at levels of model structure, reasoning/inference steps, evidence scenarios, and decisions/confidence (Derks et al., 2021).
- Reference Frame and Granularity: For visual explanations, the Reference-Frame × Granularity (RF×G) taxonomy organizes explanations along pointwise/contrastive and class/group axes (e.g., “Why this class?” vs. “Why this group and not that group?”), with implications for aligning saliency maps with user queries (Elisha et al., 17 Nov 2025).
- Stakeholder and Purpose: Crossing a mechanistic vs. social axis with a particular vs. general axis yields classes such as diagnostic, explication, expectation, and role explanations, each tied to distinct evaluative criteria and intervention scales (Yao, 2021).
- Information Flow and Modality: Differentiates data-based, model-based, and post-hoc explanations; distinguishes intrinsic (built-in) and post-hoc (externally derived) mechanisms; establishes local vs. global scope (Arya et al., 2019, Gilpin et al., 2018, Nunes et al., 2020).
- User-Expressed Needs: Captures explainability requirements via primary/secondary user concerns revealed in app reviews or user studies, including training, interaction, business, dissatisfaction, and errata (Unterbusch et al., 2023, Droste et al., 2024, Sadeghi et al., 2021).
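Several of these axes can be combined into a simple machine-readable schema. The sketch below is illustrative only: the field names and the example classifications of LIME and decision trees are our own labels along three of the axes above, not drawn from any single cited taxonomy.

```python
from dataclasses import dataclass

# Illustrative schema covering three of the axes described above.
# Field values are hypothetical labels, not from any cited paper.
@dataclass(frozen=True)
class ExplanationMethod:
    name: str
    mechanism: str  # "intrinsic" or "post-hoc" (Information Flow axis)
    scope: str      # "local" or "global"
    source: str     # "data-based", "model-based", or "post-hoc"

LIME = ExplanationMethod("LIME", mechanism="post-hoc",
                         scope="local", source="model-based")
DECISION_TREE = ExplanationMethod("decision tree", mechanism="intrinsic",
                                  scope="global", source="model-based")

def is_local_posthoc(m: ExplanationMethod) -> bool:
    """Post-hoc local methods approximate the model near a single input."""
    return m.mechanism == "post-hoc" and m.scope == "local"

print(is_local_posthoc(LIME))           # True
print(is_local_posthoc(DECISION_TREE))  # False
```

Encoding the axes as explicit fields makes orthogonality concrete: a method's position on one axis (e.g., local vs. global) can be queried independently of its position on another (intrinsic vs. post-hoc).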
2. Representative Taxonomy Schemes
The following table illustrates major representative taxonomies from the literature, their primary axes, and classification categories.
| Reference | Principal Axes / Levels | Key Categories |
|---|---|---|
| (Suffian et al., 2022) | Bloom’s cognitive levels (6-tiered) | Remember, Understand, Apply, Analyze, Evaluate, Create |
| (Nomm, 2023) | Linear algebraic operator (identity, projection, mapping) | Decision Trace, Decision Approximation, Decision Interpretation |
| (Derks et al., 2021) | BN pipeline: model, reasoning, evidence, decision | Model Structure, Reasoning, Evidence, Decision |
| (Yao, 2021) | Mechanistic/Social × Particular/General | Diagnostic, Explication, Expectation, Role |
| (Elisha et al., 17 Nov 2025) | Reference-Frame × Granularity | Pointwise/Contrastive × Class/Group Level |
| (Nejadgholi et al., 11 Jul 2025) | Context, Generation & Presentation, Evaluation | Task/Data/Audience/Goal, Model/Input/Interactivity/Output, Content |
| (Hong et al., 28 May 2025, Hong et al., 18 Oct 2025) | Reasoning Type (Text-based/World-knowledge) | Coref/Syntactic/Semantic/Pragmatic/Absence/Logic/Factual/Inferential |
| (Nunes et al., 2020) | General, Content, Presentation | Objective/Target/Generality/Responsiveness/Level, 4 content types |
| (Arya et al., 2019) | What/How/Level (Data/Model, Intrinsic/Post-hoc, Local/Global) | Features, Distributions, Self-explaining, Interpretable, Post-hoc |
| (Iser, 2024) | Selection Criterion in Explanation Generation | Abductive, Contrastive, Minimal, Generality, Anomaly, Probability |
| (Wojtowicz et al., 2020) | Explanatory Virtues in Bayesian Framework | Descriptiveness, Co-explanation, Power, Precision, Unification, Simplicity |
3. Taxonomy Application: Alignment with User and Context
Taxonomy design is fundamental for targeting explanations to diverse audiences and application domains:
- Cognitive Alignment: Employing cognitive-level-based taxonomies (e.g., Bloom’s hierarchy) enables the tailoring of counterfactual explanations such that their factual, rationale-driven, or interactive depth matches user expertise, measurably optimizing user agreement rates (Suffian et al., 2022).
- Audience and Stakeholder Fit: Taxonomies that index explanations by the consumer’s role (e.g., creator, operator, decision subject, examiner) and the desired goal (e.g., debugging, justification, auditability) support effective governance and user trust, as formalized in prompt-based NLE frameworks (Nejadgholi et al., 11 Jul 2025).
- Explanatory Scenario and System Context: Needs-based taxonomies that classify requirements from user app reviews or survey-derived confusions (e.g., system behavior, interaction, domain knowledge, privacy/security, UI) bridge the gap between raw software requirements and technical XAI methods (Unterbusch et al., 2023, Droste et al., 2024, Sadeghi et al., 2021).
- Scientific Discovery and Automated Selection: Selection taxonomies articulate optimization over properties such as sufficiency (abduction), necessity (contrastivity), minimality, generality, anomaly, and plausibility, each corresponding to a formal selection problem implementable via logic-based reasoning (Iser, 2024).
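Audience-indexed selection, as described above, amounts to a lookup from (consumer role, goal) to a recommended explanation style. The following toy sketch is in the spirit of the prompt-based NLE framework but uses entirely hypothetical role, goal, and recommendation labels.

```python
# Hypothetical mapping from (consumer role, goal) to a suitable explanation
# style; all labels are illustrative, not from any cited framework.
RECOMMENDATION = {
    ("creator", "debugging"): "decision trace / feature attribution",
    ("operator", "justification"): "counterfactual or rule-based rationale",
    ("decision subject", "justification"): "contrastive natural-language explanation",
    ("examiner", "auditability"): "global surrogate with fidelity report",
}

def recommend(role: str, goal: str) -> str:
    # Fall back to a generic post-hoc explanation when no entry matches.
    return RECOMMENDATION.get((role, goal), "generic post-hoc attribution")

print(recommend("examiner", "auditability"))
print(recommend("end user", "curiosity"))
```

A real system would of course condition on more than two keys (task, data modality, regulatory context), but the lookup structure illustrates how taxonomy axes translate into deployment decisions.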
4. Mathematical and Formal Underpinnings
Recent taxonomies increasingly formalize the structure and evaluation of explanations:
- Linear Algebraic Characterization: For explanation methods operating on the model’s input feature space, local explanations are partitioned by the linear operator applied to that space: the identity (decision trace), a projection (decision approximation), or a non-idempotent mapping (decision interpretation). This classification corresponds precisely to existing XAI techniques (e.g., decision trees, LIME/SHAP, kernel methods) (Nomm, 2023).
- Probabilistic Reasoning: Bayesian and information-theoretic approaches formalize explanatory virtues—descriptiveness, co-explanation, unification, and simplicity—within the log-posterior of a hypothesis, quantifying the trade-offs (fit, coherence, restraint) and diagnosing pathological explanatory styles (Wojtowicz et al., 2020).
- Reference-Frame × Granularity in Visual Explanation: The RF×G taxonomy formally distinguishes pointwise from contrastive and class-level from group-level visual attributions, motivating specialized faithfulness metrics (e.g., CCS, CGC, PGS, CGS) and linking directly to semantic interpretation and user intent (Elisha et al., 17 Nov 2025).
- Selection Engine Pseudocode: In automated explanation generation, the choice of taxonomy mapping (e.g., AXp, CXp, generality) is directly translated into (Max)SAT/logic encodings, controlling which explanations are emitted for scientific or regulatory review (Iser, 2024).
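The linear-algebraic split above can be checked mechanically: given the matrix of the operator an explanation method applies to the input basis, testing for identity and for idempotence (P @ P == P) separates the three classes. The sketch below is a minimal illustration under our own tolerances and labels, not an implementation from Nomm (2023).

```python
import numpy as np

def classify_operator(M: np.ndarray, tol: float = 1e-9) -> str:
    """Classify a square linear operator into the trace / approximation /
    interpretation split: identity, idempotent projection, or general mapping."""
    n = M.shape[0]
    if np.allclose(M, np.eye(n), atol=tol):
        return "decision trace"          # identity: explanation in original basis
    if np.allclose(M @ M, M, atol=tol):
        return "decision approximation"  # projection onto a subspace
    return "decision interpretation"     # non-idempotent mapping to new coordinates

print(classify_operator(np.eye(3)))                      # decision trace
P = np.diag([1.0, 1.0, 0.0])                             # drops the last feature
print(classify_operator(P))                              # decision approximation
R = np.array([[0.0, -1.0], [1.0, 0.0]])                  # 90-degree rotation
print(classify_operator(R))                              # decision interpretation
```

The three example matrices mirror the taxonomy’s archetypes: a decision tree traces the original coordinates, a sparse surrogate like LIME restricts to a feature subspace, and a kernel method re-expresses the input in new coordinates.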
5. Evaluation and Best-Practice Integration
Taxonomies facilitate rigorous explanation evaluation, including:
- Layered Evaluation Schema: Multi-part evaluation—content (correctness, completeness, contrastivity), presentation (compactness, composition, confidence), and user-centered (actionability, personalization, coherence)—enables systematic benchmarking in domains like prompt-based NLEs (Nejadgholi et al., 11 Jul 2025).
- Pipeline Mapping: Explainability methods and their corresponding taxonomy categories map to concrete AI pipeline stages: data (feature, distributional explanations), model (self-explaining, interpretable), post-hoc (features, prototypes, surrogates, visualizations) (Arya et al., 2019, Gilpin et al., 2018).
- Human-in-the-Loop Experimentation: Taxonomic scaffolding supports staged feedback and learning utilities, quantifies explanatory generalization and specialization, and proposes actionable metrics (e.g., binary agreement rates, scenario questionnaires) (Suffian et al., 2022).
- Gaps and Open Challenges: Taxonomy-driven analysis reveals underexplored domains (interactive explanations, global visualization, unsupervised distributional explanation), motivating focused research efforts (Arya et al., 2019, Nunes et al., 2020).
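The binary agreement rate cited above reduces to a simple proportion over user judgments. The function and variable names below are our own shorthand for that metric, not from the cited study.

```python
def binary_agreement_rate(user_judgments: list[bool]) -> float:
    """Fraction of presented explanations the user marked as agreeable;
    a simple human-in-the-loop evaluation metric of the kind cited above."""
    if not user_judgments:
        raise ValueError("need at least one judgment")
    return sum(user_judgments) / len(user_judgments)

# Example: 4 of 5 counterfactual explanations accepted by the user.
print(binary_agreement_rate([True, True, False, True, True]))  # 0.8
```

In staged human-in-the-loop protocols, this rate would be tracked per cognitive level or per explanation type, so that the taxonomy itself structures the evaluation.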
6. Comparative Synthesis and Interoperability
- Orthogonality to Traditional Axes: Mathematical taxonomies (e.g., coordinate-operator-based) are orthogonal and complementary to classic distinctions such as global/local scope, model-agnostic/specific, or intrinsic/post-hoc explanations (Nomm, 2023, Gilpin et al., 2018).
- Pluralism and Avoidance of Equivocation: Pluralistic taxonomies clarify ambiguous uses of “explanation” by situating explanation types (diagnostic, explication, expectation, role) along intervention and abstraction axes, each matched with distinct XAI methodologies and evaluative criteria (Yao, 2021).
- Integration Across Modalities and Formulations: Unifying taxonomies enable compatibility among explanation-generation methods (LIME, SHAP, saliency, counterfactual, rule-based), aligning them not only by technical underpinnings but also by user needs, scenario, and evaluative priorities (Arya et al., 2019, Hong et al., 28 May 2025, Suffian et al., 2022).
7. Implications for Research and Practice
The development and deployment of explanation taxonomies are foundational for the principled advancement of XAI and trustworthy software systems. They provide:
- A rigorous vocabulary and conceptual framework for comparing, designing, and auditing explanation methods.
- Actionable mappings from user needs to explanation types, supporting requirement elicitation and regulatory certification.
- A basis for standardized evaluation, empirical benchmarking, and future research targeting identified gaps in explanation diversity, interactivity, and alignment with cognitive processes and social contexts.
Researchers and practitioners are advised to select, adapt, or extend taxonomies according to the specific demands of their domain, audience, and system goals, ensuring fidelity to underlying mathematical, cognitive, and operational characteristics as delineated in current literature (Suffian et al., 2022, Nomm, 2023, Derks et al., 2021, Elisha et al., 17 Nov 2025, Yao, 2021, Iser, 2024).