Reliable Graph Neural Network Explanations Through Adversarial Training (2106.13427v1)

Published 25 Jun 2021 in cs.LG

Abstract: Graph neural network (GNN) explanations have largely been facilitated through post-hoc introspection. While this has been deemed successful, many post-hoc explanation methods have been shown to fail in capturing a model's learned representation. Due to this problem, it is worthwhile to consider how one might train a model so that it is more amenable to post-hoc analysis. Given the success of adversarial training in the computer vision domain to train models with more reliable representations, we propose a similar training paradigm for GNNs and analyze the respective impact on a model's explanations. In instances without ground truth labels, we also determine how well an explanation method is utilizing a model's learned representation through a new metric and demonstrate adversarial training can help better extract domain-relevant insights in chemistry.
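To make the proposed idea concrete, below is a minimal sketch (not the authors' code) of adversarial training for a toy GCN: node features are perturbed with PGD-style gradient steps inside an L-infinity ball, and the model is trained on the perturbed graph. All names and hyperparameters (GCNLayer, epsilon, alpha, n_steps) are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch: adversarial training of a simple GCN by perturbing node
# features. Assumes a normalized adjacency matrix a_hat, node features x,
# and labels y are already available.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GCNLayer(nn.Module):
    """One graph convolution: H' = ReLU(A_hat @ H @ W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, a_hat, h):
        return F.relu(self.lin(a_hat @ h))

class GCN(nn.Module):
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.l1 = GCNLayer(in_dim, hid_dim)
        self.out = nn.Linear(hid_dim, n_classes)

    def forward(self, a_hat, x):
        return self.out(self.l1(a_hat, x))

def adversarial_loss(model, a_hat, x, y, epsilon=0.1, alpha=0.02, n_steps=5):
    """Craft an L_inf-bounded perturbation of node features (PGD-style),
    then return the loss on the perturbed graph for the outer update."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(n_steps):
        loss = F.cross_entropy(model(a_hat, x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-epsilon, epsilon).detach()
        delta.requires_grad_(True)
    return F.cross_entropy(model(a_hat, x + delta), y)

# Usage: swap the standard loss for the adversarial one in the training loop.
# model = GCN(in_dim=16, hid_dim=32, n_classes=4)
# opt = torch.optim.Adam(model.parameters(), lr=1e-3)
# loss = adversarial_loss(model, a_hat, x, y)
# opt.zero_grad(); loss.backward(); opt.step()
```

The key design choice in such a setup is perturbing continuous node features rather than the discrete graph structure, which keeps the inner maximization differentiable; whether the paper perturbs features, edges, or both is not stated in this abstract.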

Authors (5)
  1. Donald Loveland (18 papers)
  2. Shusen Liu (29 papers)
  3. Bhavya Kailkhura (108 papers)
  4. Anna Hiszpanski (3 papers)
  5. Yong Han (28 papers)
Citations (4)