Probability of Sufficiency in Concept Interventions
- Probability of sufficiency is defined within structural causal models to measure the likelihood that altering a concept alone triggers a change in model output.
- PS is computed via counterfactual simulation or Monte Carlo methods, applied for both local instances and global population-level explanations.
- This metric filters spurious associations and supports actionable recourse by providing causal, high-fidelity explanations in concept-based XAI.
Probability of sufficiency (PS) of concept interventions is a formal, post-hoc causal metric central to contemporary concept-based interpretability and explainability frameworks in machine learning. Defined within the structural causal model (SCM) paradigm, PS quantifies, for a given concept intervention, the likelihood that fixing a concept variable to a different value alone would suffice to force a change in the model’s output. This metric is foundational for assigning causally valid, human-interpretable attributions to high-level concepts in black-box models, bridging the gap between algorithmic opacity and actionable, personalized explanations.
1. Formal Definition and Theoretical Foundations
The probability of sufficiency, introduced by Pearl, is generalized in causal concept-based XAI to assess the causal power of concept-level variables, not just low-level inputs. Given a black-box prediction function $f$ (e.g., a deep classifier) and a compact, human-meaningful set of concepts $C = (C_1, \dots, C_k)$, the SCM embeds $f$ in a probabilistically coherent causal graph: $U \rightarrow C \rightarrow X \rightarrow \hat{Y}$. Here, $U$ are exogenous sources, $C$ the semantic concepts, $g : (C, U) \mapsto X$ a generative or editing map, and $f : X \mapsto \hat{Y}$ the classifier.
A concept intervention is formally a do-operation, e.g., $\mathrm{do}(C_j = c')$, forcibly setting one or more concepts to a new value and severing their incoming causal dependencies. The probability of sufficiency for an intervention $\mathrm{do}(C_j = c')$ changing the output from $y$ to $y'$ is defined as
$$\mathrm{PS} = P\big(\hat{Y}_{\mathrm{do}(C_j = c')} = y' \,\big|\, C = c,\ U = u,\ \hat{Y} = y\big),$$
where $(c, u)$ encode the concepts and nuisance latents for an observed instance $x$ with factual prediction $y$ (Bjøru et al., 2 Dec 2025). This answers: “Given the factual instance, what is the probability that flipping $C_j$ to $c'$ alone would suffice to change the model’s decision to $y'$?”
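To make the definition concrete, the following minimal Python sketch instantiates a toy SCM of this form. The concept names (gray_hair, wrinkles), the structural equations, and the classifier are invented for illustration and are not the construction of Bjøru et al.; the sketch only shows how a do-operation on a concept is pushed through the editing map $g$ and the unchanged black box $f$.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Toy SCM: U -> C -> X -> Y_hat (illustrative names and mechanisms) ---

def sample_exogenous():
    """Exogenous sources U: one latent per concept plus a nuisance latent."""
    return {"u_hair": rng.uniform(), "u_wrinkles": rng.uniform(), "u_noise": rng.normal()}

def concepts(u):
    """Structural equations for the binary concepts C, driven by U."""
    return {"gray_hair": int(u["u_hair"] > 0.7), "wrinkles": int(u["u_wrinkles"] > 0.6)}

def g(c, u):
    """Editing/generative map g: renders concepts (plus nuisance latents) into features X."""
    return np.array([c["gray_hair"] + 0.1 * u["u_noise"],
                     c["wrinkles"] + 0.1 * u["u_noise"]])

def f(x):
    """Black-box classifier f: predicts 'old' (1) vs 'young' (0)."""
    return int(x.sum() > 1.0)

# Factual instance: draw u, derive the concepts c and the factual prediction y.
u = sample_exogenous()
c = concepts(u)
y = f(g(c, u))

# Concept intervention do(gray_hair = 1): fix the concept, keep u unchanged.
c_do = {**c, "gray_hair": 1}
y_cf = f(g(c_do, u))

# With deterministic g and f and a fully observed u, the counterfactual query
# P(Y_hat_{do(gray_hair=1)} = 1 | c, u, y) collapses to an indicator.
print(f"factual y={y}, counterfactual y'={y_cf}, sufficiency indicator={int(y_cf == 1)}")
```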
2. Computation and Algorithms for Probability of Sufficiency
When $g$ and $f$ are deterministic and the SCM is Markovian, PS can be computed for both local (instance-wise) and global (population) explanations using counterfactual simulation or Monte Carlo over the exogenous sources $U$. For a given intervention $\mathrm{do}(C_j = c')$,
$$\widehat{\mathrm{PS}} = \frac{1}{N} \sum_{i=1}^{N} \mathbb{1}\!\left[\hat{Y}^{(i)}_{\mathrm{do}(C_j = c')} = y'\right],$$
where $\hat{Y}^{(i)}_{\mathrm{do}(C_j = c')}$ is the output of the SCM with $C_j$ fixed to $c'$ under the $i$-th sampled exogenous configuration $u^{(i)}$, and $\mathbb{1}[\cdot]$ is the indicator function (Bjøru et al., 2 Dec 2025).
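A minimal Monte Carlo sketch of this estimator, written against the toy SCM interface from the sketch above (sample_exogenous, concepts, g, f). The rejection-sampling abduction step is one simple way of conditioning on the factual evidence when the nuisance latents are not observed; it is an illustrative assumption, not necessarily the procedure of Bjøru et al.

```python
def monte_carlo_ps(sample_exogenous, concepts, g, f,
                   c_factual, y_factual, concept, value, y_target,
                   n_samples=10_000, max_draws=1_000_000):
    """Monte Carlo estimate of local PS,
    P(Y_hat_{do(concept=value)} = y_target | C = c_factual, Y_hat = y_factual),
    marginalizing over the nuisance latents U by rejection sampling (abduction),
    then applying the do-intervention and re-querying the black box f."""
    hits, kept, draws = 0, 0, 0
    while kept < n_samples and draws < max_draws:
        draws += 1
        u = sample_exogenous()
        c = concepts(u)
        # Abduction: keep only exogenous draws consistent with the factual evidence.
        if c != c_factual or f(g(c, u)) != y_factual:
            continue
        kept += 1
        # Action: sever the concept's incoming edges by fixing it to the intervened value.
        c_do = {**c, concept: value}
        # Prediction: carry the counterfactual forward through the unchanged black box.
        hits += int(f(g(c_do, u)) == y_target)
    return hits / max(kept, 1)

# Example usage with the toy SCM above: PS that do(gray_hair=1) flips a factual
# "young" (y=0) instance with no gray hair and no wrinkles to "old" (y'=1).
# monte_carlo_ps(sample_exogenous, concepts, g, f,
#                {"gray_hair": 0, "wrinkles": 0}, 0, "gray_hair", 1, 1)
```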
Local explanations fix the instance of interest and estimate PS for all candidate concept interventions using the above mechanism. Global explanations compute PS under population-level interventions (a Monte Carlo sketch follows the list):
- Global (marginal) PS averages local PS over the marginal distribution of all instances.
- Conditional (subgroup) PS averages local PS over all instances with $C_j = c$, $\hat{Y} = y$.
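A sketch of these population-level variants under the same toy-SCM interface: with deterministic $g$ and $f$, the local PS for a fully specified draw $(c, u)$ is an indicator, so global and subgroup PS reduce to averaging that indicator over the relevant population. The function signature and the subgroup predicate are illustrative assumptions.

```python
def population_ps(sample_exogenous, concepts, g, f,
                  concept, value, y_target,
                  subgroup=None, n_samples=20_000):
    """Monte Carlo estimate of global (marginal) PS, or subgroup PS when
    `subgroup` is a predicate on (c, y) selecting the conditioning set,
    e.g. lambda c, y: c["gray_hair"] == 0 and y == 0."""
    hits, kept = 0, 0
    for _ in range(n_samples):
        u = sample_exogenous()
        c = concepts(u)
        y = f(g(c, u))
        if subgroup is not None and not subgroup(c, y):
            continue  # outside the conditioning subgroup
        kept += 1
        c_do = {**c, concept: value}            # population-level do-intervention
        hits += int(f(g(c_do, u)) == y_target)  # deterministic local PS is an indicator
    return hits / max(kept, 1)
```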
Semi-synthetic or learned generative models (e.g., StarGAN for facial images (Bjøru et al., 2 Dec 2025)) provide the requisite editing maps and plausible in-distribution samples.
3. Concept Interventions, Difference-Making, and Causal Attribution
Probability of sufficiency operationalizes the notion of “difference-making”: a concept-level variable $C_j$ is a sufficient cause of a prediction change iff its PS is large for some intervened value $c'$. This instantiates the interventionist theory of explanation (Sani et al., 2020): only variables for which $\mathrm{PS}$ is non-negligible are admitted as causal attributors; variables associated with the output merely via confounding or through proxy correlations (as detected by partial ancestral graphs, FCI, or similar causal discovery tools) will typically display low PS and are filtered out (Sani et al., 2020, Bjøru et al., 2 Dec 2025).
This approach sharply contrasts with associational feature attribution metrics (e.g., LIME, SHAP), which do not distinguish direct cause from statistical association. Notably, sufficiency and necessity probabilities, as framed in the LEWIS system (Galhotra et al., 2021), allow for instance-specific and context-aware evaluations of causal responsibility, producing actionable, counterfactually-grounded explanations and recourse.
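The following toy simulation (invented for illustration, not taken from the cited papers) makes the filtering behavior concrete: a concept that is only correlated with the output through a shared confounder receives PS of zero under intervention, while the genuinely causal concept receives PS of one.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two concepts share a confounder, so "smile" is correlated with the output,
# but the (hypothetical) classifier only looks at the feature driven by "gray_hair".
def sample_instance():
    u_conf = rng.uniform()                              # shared confounder
    c = {"gray_hair": int(u_conf > 0.5),
         "smile": int(u_conf + 0.1 * rng.normal() > 0.5)}  # correlated, non-causal
    return c, u_conf

def g(c, u_conf):
    return np.array([float(c["gray_hair"]), float(c["smile"])])

def f(x):
    return int(x[0] > 0.5)                              # output depends on gray_hair only

def ps_to_flip(concept, n=20_000):
    """Subgroup PS of do(concept=1) for flipping the prediction from 0 to 1."""
    hits, kept = 0, 0
    for _ in range(n):
        c, u_conf = sample_instance()
        if f(g(c, u_conf)) != 0:
            continue                                    # condition on factual prediction y = 0
        kept += 1
        c_do = {**c, concept: 1}
        hits += int(f(g(c_do, u_conf)) == 1)
    return hits / max(kept, 1)

print("PS(do(gray_hair=1)):", ps_to_flip("gray_hair"))  # 1.0 here: genuine cause
print("PS(do(smile=1)):    ", ps_to_flip("smile"))      # 0.0 here: spurious association
```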
4. Evaluation Protocols and Empirical Examples
Empirical studies use PS both as an explanation metric and a validation criterion. In classification tasks (e.g., CelebA faces), causal PS identifies which single attribute flips (e.g., “do(GrayHair=1)”) most strongly increase the probability of flipping the predicted class (e.g., Young → Old), and reports the corresponding PS values (Bjøru et al., 2 Dec 2025). PS can be computed not just for singleton interventions, but for multi-concept interventions as well.
A summary of explanation types and PS-based metrics:
| Explanation regime | Conditioning | PS formula |
|---|---|---|
| Local | factual $(c, u)$ fixed | $P(\hat{Y}_{\mathrm{do}(C_j = c')} = y' \mid C = c,\ U = u,\ \hat{Y} = y)$ |
| Global (marginal) | none | $\mathbb{E}_{(c, u) \sim P(C, U)}\!\left[\mathbb{1}\{\hat{Y}_{\mathrm{do}(C_j = c')} = y'\}\right]$ |
| Subgroup | $C_j = c$, $\hat{Y} = y$ | $\mathbb{E}\!\left[\mathbb{1}\{\hat{Y}_{\mathrm{do}(C_j = c')} = y'\} \,\middle|\, C_j = c,\ \hat{Y} = y\right]$ |
PS is used to rank candidate interventions, yielding an interpretable hierarchy of “most causally potent” concepts for a target class or subgroup.
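A small sketch of this ranking step, assuming some PS estimator is already available (e.g., one of the Monte Carlo sketches above); the candidate interventions and the pre-computed PS numbers in the usage example are purely hypothetical.

```python
def rank_interventions(ps_estimator, candidates, y_target):
    """Rank candidate concept interventions by estimated PS for producing y_target.

    `ps_estimator(concept, value, y_target)` is any PS estimate (local, global,
    or subgroup); `candidates` is a list of (concept, value) pairs."""
    scored = [(concept, value, ps_estimator(concept, value, y_target))
              for concept, value in candidates]
    return sorted(scored, key=lambda t: t[2], reverse=True)

# Example (hypothetical numbers): a pre-computed PS table for a target class y' = 1.
precomputed = {("gray_hair", 1): 0.81, ("wrinkles", 1): 0.64, ("smile", 1): 0.02}
ranking = rank_interventions(lambda c, v, y: precomputed[(c, v)],
                             list(precomputed), y_target=1)
print(ranking)  # most causally potent concept flips first
```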
Experiments on tabular and image data confirm the high interpretive value of PS explanations: top-PS concept flips correspond to manipulations that produce large, targeted shifts in model output, aligning with domain intuition and outperforming gradient-based concept attribution or association-rule mining in faithfulness and causal verification (Bjøru et al., 2 Dec 2025, Xu et al., 2020, Moreira et al., 2024).
5. Assumptions and Limitations
The causal faithfulness and interpretability of PS-based explanations hinge on several key assumptions:
- Completeness of concept vocabulary: All high-level causes for $\hat{Y}$ are included in $C$. Omitted causes are relegated to nuisance latents $U$, assumed independent of $C$ to avoid spurious causal attributions (Bjøru et al., 2 Dec 2025).
- SCM correctness: The causal graph and concept structural equations reflect the actual data-generating process. Incomplete knowledge can be addressed by partially specified SCMs, reporting PS intervals that upper and lower bound the true causal effect (Bjøru et al., 2 Dec 2025).
- Valid editing/intervention mechanism: The concept-to-input decoder $g$ must yield in-distribution, semantically coherent counterfactuals. In practice, this may require high-quality generative models (e.g., StarGAN) (Bjøru et al., 2 Dec 2025).
- No access to black-box internals: PS explanations are entirely post-hoc and query-based; fidelity is guaranteed because each concept intervention is carried forward through the unchanged black-box $f$.
A significant limitation is the need for a comprehensive concept basis and an accurate SCM. Failure to satisfy these leads to indeterminacy or ambiguous PS intervals.
6. Broader Connections and Practical Implications
Probability of sufficiency for concept interventions underpins principled, faithfulness-guaranteed causal concept-based XAI. It directly informs actionable recourse, personalized model (counter)factual audit, and debugging:
- Recourse and actionable feedback: By quantifying the sufficiency of specific concept changes to flip a model’s verdict, PS enables the identification of minimal, targeted interventions for desired outcomes (Galhotra et al., 2021).
- Contrast with correlation-based explanations: PS explanations explicitly filter out spurious associations, only admitting interventions with genuine causal power. Empirical findings demonstrate that PS explanations deliver stable, actionable, and human-meaningful attributions, unlike LIME/SHAP (Bjøru et al., 2 Dec 2025, Galhotra et al., 2021).
- Alignment and interpretability: When aligned with human-interpretable vocabularies and structural conditions (no concept mixing, monotonicity), PS explanations satisfy rigorous criteria for transparency and stakeholder communication (Marconato et al., 2023).
7. Future Directions
Recent research calls for extensions to multi-class and continuous concepts, partially specified or learned SCMs (using e.g., NO-TEARS structure penalties), automation of concept definition via LLMs, and integration of latent-variable concept representations. Computational efficiency for PS computation and relaxation of the expert-defined DAG requirement are ongoing themes (Moreira et al., 2024, Bjøru et al., 2 Dec 2025).
In summary, the probability of sufficiency of concept interventions is the foundational quantitative metric for causal, concept-based model explanations, supporting actionable, high-fidelity, human-interpretable XAI within the SCM framework (Bjøru et al., 2 Dec 2025, Galhotra et al., 2021, Moreira et al., 2024, Sani et al., 2020, Marconato et al., 2023, Xu et al., 2020).