
Graph Neural Network Explanations are Fragile (2406.03193v1)

Published 5 Jun 2024 in cs.CR and cs.LG

Abstract: Explainable Graph Neural Networks (GNNs) have emerged recently to foster trust in the use of GNNs. Existing GNN explainers have been developed from various perspectives to enhance explanation performance. We take the first step toward studying GNN explainers under adversarial attack: we find that an adversary can slightly perturb the graph structure such that the GNN model still makes correct predictions, yet the GNN explainer yields a drastically different explanation on the perturbed graph. Specifically, we first formulate the attack problem under a practical threat model (i.e., the adversary has limited knowledge of the GNN explainer and a restricted perturbation budget). We then design two methods (one loss-based, the other deduction-based) to realize the attack. We evaluate our attacks on various GNN explainers, and the results show that these explainers are fragile.
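The loss-based attack described above can be illustrated with a minimal sketch. This is not the paper's actual method: it uses a hypothetical one-layer linear "GNN", a toy gradient-style edge-importance explainer, and a greedy search that flips edges within a budget to push the explanation away from the original while keeping the prediction's sign unchanged. All function names and the model itself are illustrative assumptions.

```python
import numpy as np

def predict(A, x, w):
    # Toy one-layer "GNN" (assumption, not the paper's model):
    # aggregate neighbor features via A, then a linear graph-level readout.
    h = A @ x
    return float(h @ w)  # sign of the score = predicted class

def explain(A, x, w):
    # Toy gradient-style explainer: importance of edge (i, j) is
    # d score / d A[i, j] = w[i] * x[j], masked to existing edges.
    return np.outer(w, x) * A

def loss_based_attack(A, x, w, budget):
    """Greedy loss-based structural attack (illustrative sketch):
    flip up to `budget` edges so the explanation moves as far as possible
    from the original, while the prediction must stay unchanged."""
    e0 = explain(A, x, w)
    y0 = np.sign(predict(A, x, w))
    A_adv = A.copy()
    for _ in range(budget):
        best, best_gain = None, 0.0
        n = A.shape[0]
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                cand = A_adv.copy()
                cand[i, j] = 1 - cand[i, j]  # flip one edge on/off
                # Threat-model constraint: the prediction must stay correct.
                if np.sign(predict(cand, x, w)) != y0:
                    continue
                # "Loss": distance between new and original explanations.
                gain = np.linalg.norm(explain(cand, x, w) - e0)
                if gain > best_gain:
                    best, best_gain = cand, gain
        if best is None:
            break  # no admissible flip improves the attack loss
        A_adv = best
    return A_adv
```

On a small synthetic graph, the attack returns a perturbed adjacency matrix within the budget whose prediction matches the original but whose explanation differs, which is exactly the fragility the abstract describes.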

Authors (5)
  1. Jiate Li (5 papers)
  2. Meng Pang (27 papers)
  3. Yun Dong (7 papers)
  4. Jinyuan Jia (69 papers)
  5. Binghui Wang (58 papers)
Citations (8)
