
PGM-Explainer: Probabilistic Graphical Model Explanations for Graph Neural Networks (2010.05788v1)

Published 12 Oct 2020 in cs.LG

Abstract: In Graph Neural Networks (GNNs), the graph structure is incorporated into the learning of node representations. This complex structure makes explaining GNNs' predictions much more challenging. In this paper, we propose PGM-Explainer, a Probabilistic Graphical Model (PGM) model-agnostic explainer for GNNs. Given a prediction to be explained, PGM-Explainer identifies crucial graph components and generates an explanation in the form of a PGM approximating that prediction. Different from existing explainers for GNNs, where explanations are drawn from a set of linear functions of the explained features, PGM-Explainer is able to demonstrate the dependencies of explained features in the form of conditional probabilities. Our theoretical analysis shows that the PGM generated by PGM-Explainer includes the Markov blanket of the target prediction, i.e., all of its statistical information. We also show that the explanation returned by PGM-Explainer contains the same set of independence statements as the perfect map. Our experiments on both synthetic and real-world datasets show that PGM-Explainer achieves better performance than existing explainers on many benchmark tasks.

Citations (288)

Summary

  • The paper introduces a model-agnostic explainer that uses Bayesian networks to capture non-linear feature dependencies in GNN predictions.
  • It employs data perturbation, variable selection, and structure learning with BIC scoring to approximate GNN behavior accurately.
  • Experimental results on synthetic and real-world datasets validate its superior performance, enhancing interpretability and trust in AI models.

Overview of PGM-Explainer: Probabilistic Graphical Model Explanations for Graph Neural Networks

Graph Neural Networks (GNNs) have risen to prominence due to their capacity to handle graph-structured data across diverse areas such as social networks, knowledge graphs, and biological networks. Despite these advances, the interpretability of GNNs remains a pressing concern. Addressing this, the paper introduces PGM-Explainer, a model-agnostic explainer that leverages Probabilistic Graphical Models (PGMs) to explain GNN predictions. The innovation lies in PGM-Explainer's ability to capture dependencies among features via conditional probabilities, unlike conventional explainers built on linear assumptions.

Methodology

The primary approach of PGM-Explainer is to approximate a GNN prediction with a Bayesian network that serves as an interpretable surrogate model. This diverges from traditional additive feature attribution approaches: rather than assuming a linear relationship between features and the prediction, it embraces the non-linear dependencies inherent in GNNs.
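
To see why conditional probabilities can express dependencies that linear attributions cannot, consider a hypothetical XOR-style interaction (illustrative only, not an example from the paper): the prediction flips only when exactly one of two variables is perturbed, so each variable alone looks uninformative to a linear model, while a PGM captures the interaction directly.

```python
import numpy as np

# Two binary "perturbation indicator" variables with an XOR-style effect:
# the target prediction flips iff exactly one of the two is perturbed.
rng = np.random.default_rng(0)
a = rng.integers(0, 2, 10_000)
b = rng.integers(0, 2, 10_000)
y = a ^ b

# A linear view sees almost nothing: each variable alone is uncorrelated
# with the outcome.
print(np.corrcoef(a, y)[0, 1])        # ~0.0
print(np.corrcoef(b, y)[0, 1])        # ~0.0

# Conditional probabilities, the currency of a PGM, expose the dependency.
print(y[(a == 1) & (b == 0)].mean())  # 1.0: P(flip | only a perturbed)
print(y[(a == 1) & (b == 1)].mean())  # 0.0: P(flip | both perturbed)
```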

The process of PGM-Explainer comprises three main steps (a sketch of all three follows the list):

  1. Data Generation: the features of the input graph are randomly perturbed to build a dataset of input-output samples that captures the GNN's predictive behavior in a neighborhood of the input.
  2. Variable Selection: important variables are retained based on their influence on the prediction, aiming for a compact yet informative Markov blanket, i.e., the smallest set of variables that carries all the statistical information about the target prediction.
  3. Structure Learning: a Bayesian network is constructed from the filtered data, using the BIC score to select the optimal structure. The algorithm can impose the constraint that the target variable has no children, which improves both inference efficiency and the clarity of the explanation.
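
The following is a minimal sketch of the three steps, assuming a trained GNN exposed through a hypothetical `gnn_predict(features)` callable that returns per-node class scores, and using pgmpy's `HillClimbSearch` with `BicScore` for structure learning (class names may differ across pgmpy versions). The chi-square ranking used for variable selection is a simple stand-in for the paper's Markov-blanket-oriented selection, and all parameter names are illustrative.

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency
from pgmpy.estimators import BicScore, HillClimbSearch

def pgm_explain(gnn_predict, features, target_node,
                num_samples=1000, perturb_prob=0.3, top_k=8):
    """Approximate the GNN's prediction for `target_node` with a Bayesian
    network over node-perturbation indicators (a sketch, not the paper's code)."""
    num_nodes = features.shape[0]
    original = gnn_predict(features).argmax(axis=-1)[target_node]

    # Step 1: data generation -- perturb random subsets of nodes and record
    # which nodes were perturbed and whether the target's prediction changed.
    records = []
    for _ in range(num_samples):
        mask = np.random.rand(num_nodes) < perturb_prob
        x = features.copy()
        x[mask] = features.mean(axis=0)  # mean-value perturbation (one simple choice)
        changed = gnn_predict(x).argmax(axis=-1)[target_node] != original
        records.append(np.append(mask.astype(int), int(changed)))
    cols = [f"n{i}" for i in range(num_nodes)] + ["target"]
    data = pd.DataFrame(records, columns=cols)

    # Step 2: variable selection -- keep the nodes whose perturbation is most
    # strongly dependent on the outcome (chi-square test on a 2x2 table).
    def pval(col):
        table = pd.crosstab(data[col], data["target"])
        return chi2_contingency(table)[1] if table.shape == (2, 2) else 1.0
    selected = sorted(cols[:-1], key=pval)[:top_k]
    subset = data[selected + ["target"]]

    # Step 3: structure learning -- hill-climb a Bayesian network under the
    # BIC score; blacklisting outgoing edges from "target" enforces the
    # no-children constraint mentioned above.
    dag = HillClimbSearch(subset).estimate(
        scoring_method=BicScore(subset),
        black_list=[("target", v) for v in selected])
    return dag  # a DAG over influential nodes and the target prediction
```

The `black_list` argument is one straightforward way to realize the no-children constraint on the target variable; the edges of the returned DAG (e.g., `dag.edges()`) then form the explanation.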

Experimental Results

Experiments on both synthetic and real-world datasets demonstrate that PGM-Explainer delivers accurate and intuitive explanations. On synthetic datasets designed to test whether explainers can capture complex interactions, PGM-Explainer shows superior performance thanks to its ability to model non-linear dependencies.

On real-world datasets such as the Bitcoin-Alpha and Bitcoin-OTC trust networks, PGM-Explainer significantly outperformed explainers like GNNExplainer and gradient-based methods on metrics such as precision. In human-subject evaluations on the MNIST SuperPixel-Graph dataset, PGM-Explainer's explanations were preferred over the alternatives, indicating that they better reflect the underlying reasons for GNN predictions.
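
For the precision numbers above, a common convention (and presumably the one used here) scores an explanation by the fraction of its nodes that fall inside the known ground-truth motif; a minimal sketch, with hypothetical node-set inputs:

```python
def explanation_precision(explained_nodes, ground_truth_nodes):
    # Fraction of the explanation's nodes that belong to the ground-truth
    # motif; higher is better.
    explained = set(explained_nodes)
    return len(explained & set(ground_truth_nodes)) / len(explained)

# Example: 3 of 4 returned nodes lie in the motif -> precision 0.75
print(explanation_precision({1, 2, 3, 4}, {2, 3, 4, 5}))
```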

Implications and Future Directions

PGM-Explainer advances the interpretability of GNNs by providing deeper insight into their decision-making processes, addressing concerns of fairness, privacy, and safety in deployment. Future research may extend this work by exploring other forms of PGMs, such as Markov networks or dependency networks, as the interpretable model. Further analysis of alternative objective functions and structure-learning methods could also refine the trade-off between explanation accuracy and computational efficiency.

PGM-Explainer sets the stage for a more rigorous science of interpretable machine learning models, essential for broader acceptance and trust in AI systems.
