
Robust Counterfactual Explanations on Graph Neural Networks (2107.04086v3)

Published 8 Jul 2021 in cs.LG and cs.AI

Abstract: Massive deployment of Graph Neural Networks (GNNs) in high-stakes applications generates a strong demand for explanations that are robust to noise and align well with human intuition. Most existing methods generate explanations by identifying a subgraph of an input graph that has a strong correlation with the prediction. These explanations are not robust to noise because independently optimizing the correlation for a single input can easily overfit noise. Moreover, they do not align well with human intuition because removing an identified subgraph from an input graph does not necessarily change the prediction result. In this paper, we propose a novel method to generate robust counterfactual explanations on GNNs by explicitly modelling the common decision logic of GNNs on similar input graphs. Our explanations are naturally robust to noise because they are produced from the common decision boundaries of a GNN that govern the predictions of many similar input graphs. The explanations also align well with human intuition because removing the set of edges identified by an explanation from the input graph changes the prediction significantly. Exhaustive experiments on many public datasets demonstrate the superior performance of our method.
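The counterfactual criterion the abstract describes can be sketched in a few lines: an edge set is a valid counterfactual explanation when deleting those edges from the input graph flips the model's prediction. The sketch below is a hypothetical illustration only, not the paper's method; `predict` is a toy stand-in for a trained GNN (here, a trivial triangle detector), and all names are made up for the example.

```python
# Hypothetical illustration of the counterfactual criterion: an edge set
# "explains" a prediction if removing it changes the model's output.
# `predict` is a toy stand-in for a trained GNN classifier.

def predict(edges):
    """Toy classifier: returns 1 if the graph contains a triangle, else 0."""
    edge_set = {frozenset(e) for e in edges}
    nodes = sorted({n for e in edges for n in e})
    for i, a in enumerate(nodes):
        for b in nodes[i + 1:]:
            for c in nodes:
                if c > b and {frozenset((a, b)), frozenset((b, c)),
                              frozenset((a, c))} <= edge_set:
                    return 1
    return 0

def is_counterfactual(edges, removed):
    """True if deleting `removed` edges flips the model's prediction."""
    removed_set = {frozenset(r) for r in removed}
    kept = [e for e in edges if frozenset(e) not in removed_set]
    return predict(edges) != predict(kept)

graph = [(0, 1), (1, 2), (0, 2), (2, 3)]
print(is_counterfactual(graph, [(0, 1)]))  # deleting a triangle edge flips 1 -> 0
print(is_counterfactual(graph, [(2, 3)]))  # the dangling edge is not counterfactual
```

Robustness, as the abstract frames it, comes from choosing edge sets supported by the decision boundaries shared across many similar inputs rather than optimizing this test independently per graph.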

Authors (7)
  1. Mohit Bajaj (3 papers)
  2. Lingyang Chu (21 papers)
  3. Zi Yu Xue (4 papers)
  4. Jian Pei (104 papers)
  5. Lanjun Wang (36 papers)
  6. Peter Cho-Ho Lam (4 papers)
  7. Yong Zhang (660 papers)
Citations (85)
