
Global Explainability of GNNs via Logic Combination of Learned Concepts (2210.07147v3)

Published 13 Oct 2022 in cs.LG, cs.AI, and cs.LO

Abstract: While instance-level explanation of GNN is a well-studied problem with plenty of approaches being developed, providing a global explanation for the behaviour of a GNN is much less explored, despite its potential in interpretability and debugging. Existing solutions either simply list local explanations for a given class, or generate a synthetic prototypical graph with maximal score for a given class, completely missing any combinatorial aspect that the GNN could have learned. In this work, we propose GLGExplainer (Global Logic-based GNN Explainer), the first Global Explainer capable of generating explanations as arbitrary Boolean combinations of learned graphical concepts. GLGExplainer is a fully differentiable architecture that takes local explanations as inputs and combines them into a logic formula over graphical concepts, represented as clusters of local explanations. Contrary to existing solutions, GLGExplainer provides accurate and human-interpretable global explanations that are perfectly aligned with ground-truth explanations (on synthetic data) or match existing domain knowledge (on real-world data). Extracted formulas are faithful to the model predictions, to the point of providing insights into some occasionally incorrect rules learned by the model, making GLGExplainer a promising diagnostic tool for learned GNNs.

Citations (47)

Summary

  • The paper introduces GLGExplainer, which transforms local GNN explanations into global Boolean logic rules.
  • It clusters local subgraph interpretations into prototypical concepts to reveal high-level decision patterns.
  • Evaluations on synthetic and real datasets demonstrate high fidelity in capturing GNN behavior and identifying key features like nitro groups.

Global Explainability of GNNs via Logic Combination of Learned Concepts

The research presented in the paper addresses a significant challenge in graph neural networks (GNNs): providing global explanations for model behavior. While local explanation methods have been extensively developed, they often fail to encapsulate the global patterns and interactions that a GNN learns from its input data. This paper introduces GLGExplainer, a novel approach that constructs global explanations for GNNs using Boolean combinations of learned graphical concepts.

GLGExplainer operates by first extracting local explanations for a GNN's predictions using an existing local explanation method such as PGExplainer. These local explanations, typically small subgraphs of the input graphs, are embedded into fixed-size representations. Each embedding is then compared against a set of learned prototypes, and local explanations that fall near the same prototype form a cluster representing a high-level graphical concept. These clusters are crucial because they provide the interpretable units over which global explanations are expressed.
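In the paper the prototypes are learned end-to-end with the rest of the architecture; as a rough, non-differentiable approximation, the concept-discovery step can be pictured as clustering the explanation embeddings, as in the sketch below (the embedding dimension, number of concepts, and function names are illustrative assumptions, not the paper's configuration).

```python
# Hedged sketch: group local-explanation embeddings into prototypical concepts.
# Assumes each local explanation subgraph has already been embedded into a
# fixed-size vector by some graph encoder (not shown here).
import numpy as np
from sklearn.cluster import KMeans

def assign_concepts(explanation_embeddings: np.ndarray, n_concepts: int = 5):
    """Cluster explanation embeddings into `n_concepts` prototypical concepts.

    explanation_embeddings: array of shape (n_explanations, emb_dim).
    Returns the cluster centroids (concept prototypes) and a hard concept
    index for every local explanation.
    """
    kmeans = KMeans(n_clusters=n_concepts, n_init=10, random_state=0)
    concept_ids = kmeans.fit_predict(explanation_embeddings)
    return kmeans.cluster_centers_, concept_ids

# Toy usage with random stand-in embeddings:
embeddings = np.random.randn(200, 32)
prototypes, concept_ids = assign_concepts(embeddings)
```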

A distinctive aspect of GLGExplainer is its fully differentiable architecture for combining these concepts into logic formulas. The formulas aim to mimic the GNN's decision-making process by logically combining the identified concepts, and the approach leverages Entropy-based Logic Explained Networks (E-LENs) to distill this combination into human-readable Boolean rules. This setup allows GLGExplainer not only to expose the model's decision structure but also to surface occasionally incorrect rules that the model has learned, making it useful as a diagnostic tool.
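A minimal sketch of this step, under simplifying assumptions, is given below: soft concept activations are computed from distances to learnable prototypes, a linear head maps them to class scores, and an entropy penalty on the head's weights pushes it toward sparse, formula-like dependence on a few concepts. This is an illustrative stand-in, not the paper's exact E-LEN parameterisation.

```python
# Hedged sketch of the "concepts -> logic formula" step.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConceptLogicHead(nn.Module):
    def __init__(self, emb_dim: int, n_concepts: int, n_classes: int,
                 temperature: float = 1.0):
        super().__init__()
        # Learnable concept prototypes in the explanation-embedding space.
        self.prototypes = nn.Parameter(torch.randn(n_concepts, emb_dim))
        self.classifier = nn.Linear(n_concepts, n_classes)
        self.temperature = temperature

    def concept_activations(self, emb: torch.Tensor) -> torch.Tensor:
        # Soft assignment of each local explanation to the concepts,
        # based on distance to the prototypes.
        dists = torch.cdist(emb, self.prototypes)      # (batch, n_concepts)
        return F.softmax(-dists / self.temperature, dim=-1)

    def forward(self, emb: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.concept_activations(emb))

    def concept_relevance(self) -> torch.Tensor:
        # Normalised weight magnitudes: a crude proxy for which concepts
        # an extracted Boolean formula would mention for each class.
        w = self.classifier.weight.abs()
        return w / w.sum(dim=-1, keepdim=True)

def entropy_penalty(relevance: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    # Penalising the entropy of concept relevance encourages each class to
    # depend on only a few concepts, which keeps extracted formulas readable.
    return -(relevance * (relevance + eps).log()).sum(dim=-1).mean()
```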

The experimental evaluation of GLGExplainer covers synthetic and real-world datasets, including BAMultiShapes, Mutagenicity, and the Hospital Interaction Network (HIN). GLGExplainer achieves high fidelity and accuracy in capturing the GNNs' behaviour across these datasets. For instance, it identifies well-known mutagenicity indicators such as nitro groups in the Mutagenicity dataset, and on BAMultiShapes it recovers the underlying compositional logic of the dataset while also exposing discrepancies between the model's learned rules and the ground-truth labelling.
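Fidelity here measures how often the extracted formula reproduces the GNN's own predictions (rather than the ground-truth labels). A minimal way to compute it, with placeholder inputs, is:

```python
# Illustrative fidelity check: agreement between the global explanation's
# predictions and the GNN's predictions on the same graphs.
import numpy as np

def fidelity(formula_preds: np.ndarray, gnn_preds: np.ndarray) -> float:
    """Fraction of graphs on which the extracted logic formula predicts the
    same class as the GNN being explained."""
    return float(np.mean(formula_preds == gnn_preds))

# Toy usage with made-up predictions:
print(fidelity(np.array([1, 0, 1, 1]), np.array([1, 0, 0, 1])))  # 0.75
```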

Overall, GLGExplainer marks a significant step towards enhancing the interpretability of GNNs through global explanations. By transforming local explanations into coherent global insights, it offers substantial utility for debugging and for better understanding complex models. Future research could extend GLGExplainer to other types of neural networks and explore applications in domains where understanding model logic is a prerequisite for deployment. Further improvements might refine the concept-clustering mechanism or extend the approach to dynamic or evolving graphs. Such advances would further align model explanations with human domain knowledge, strengthening trust and accountability in AI systems.
