
Adversarial Examples on Graph Data: Deep Insights into Attack and Defense (1903.01610v3)

Published 5 Mar 2019 in cs.LG, cs.CR, and stat.ML

Abstract: Graph deep learning models, such as graph convolutional networks (GCNs), achieve remarkable performance on graph-data tasks. Like other deep models, however, graph deep learning models are vulnerable to adversarial attacks. Compared with non-graph data, the discrete features, graph connections, and differing definitions of imperceptible perturbation pose unique challenges and opportunities for adversarial attack and defense on graph data. In this paper, we propose both attack and defense techniques. For the attack, we show that the discreteness problem can be resolved by introducing integrated gradients, which accurately reflect the effect of perturbing particular features or edges while still benefiting from parallel computation. For the defense, we observe that a graph adversarially manipulated by a targeted attack differs statistically from normal graphs. Based on this observation, we propose a defense that inspects the graph and recovers potential adversarial perturbations. Our experiments on a number of datasets show the effectiveness of the proposed methods.
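
The abstract only outlines the attack at a high level. Below is a minimal, hypothetical sketch (not the authors' code) of how integrated gradients can score candidate edge perturbations for a targeted attack on a surrogate GCN: existing edges are scored against an all-zero baseline (effect of removal) and absent edges against an all-one baseline (effect of addition), with the path integral approximated by a finite sum so that gradients over all candidate entries are computed in parallel. The helper names `gcn_forward` and `ig_edge_scores` are invented for illustration; feature perturbations, adjacency symmetrization during the attack, and the greedy perturbation loop are omitted.

```python
import torch
import torch.nn.functional as F


def gcn_forward(adj, feats, w1, w2):
    """Two-layer GCN surrogate: log_softmax(A_hat relu(A_hat X W1) W2)."""
    deg = adj.sum(dim=1).clamp(min=1e-6)
    d_inv_sqrt = torch.diag(deg.pow(-0.5))
    a_hat = d_inv_sqrt @ adj @ d_inv_sqrt
    h = torch.relu(a_hat @ feats @ w1)
    return F.log_softmax(a_hat @ h @ w2, dim=1)


def ig_edge_scores(adj, feats, w1, w2, target, label, steps=20):
    """Approximate integrated gradients of the target node's loss w.r.t. each
    adjacency entry. Existing edges are scored against an all-zero baseline
    (effect of removal); absent edges against an all-one baseline (effect of
    addition). Large-magnitude scores mark the most damaging single-edge
    perturbations."""
    eye = torch.eye(adj.shape[0])
    scores = torch.zeros_like(adj)
    for baseline, mask in ((torch.zeros_like(adj), adj > 0),
                           (torch.ones_like(adj), adj == 0)):
        grads = torch.zeros_like(adj)
        for k in range(1, steps + 1):
            # Point on the straight-line path from the baseline to the input.
            a = (baseline + (k / steps) * (adj - baseline)).detach().requires_grad_(True)
            logp = gcn_forward(a + eye, feats, w1, w2)   # self-loops keep A_hat defined
            loss = F.nll_loss(logp[target:target + 1], label.view(1))
            grads = grads + torch.autograd.grad(loss, a)[0]
        # Riemann approximation of IG: (x - baseline) * average gradient.
        ig = (adj - baseline) * grads / steps
        scores[mask] = ig[mask]
    return scores


if __name__ == "__main__":
    torch.manual_seed(0)
    n, f, h, c = 8, 5, 4, 3                     # toy graph: 8 nodes, 5 features, 3 classes
    adj = (torch.rand(n, n) > 0.7).float()
    adj = ((adj + adj.t()) > 0).float().fill_diagonal_(0)
    feats = torch.rand(n, f)
    w1, w2 = torch.randn(f, h), torch.randn(h, c)
    s = ig_edge_scores(adj, feats, w1, w2, target=0, label=torch.tensor(1))
    print(s[0])                                 # scores for edges incident to node 0
```

On the defense side the abstract gives only the high-level idea (the attacked graph differs statistically from normal graphs); a natural instantiation in this line of work is to inspect each edge and prune those whose endpoint feature vectors are highly dissimilar, e.g. by Jaccard similarity, before training or inference.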

Authors (6)
  1. Huijun Wu (12 papers)
  2. Chen Wang (600 papers)
  3. Yuriy Tyshetskiy (8 papers)
  4. Andrew Docherty (2 papers)
  5. Kai Lu (35 papers)
  6. Liming Zhu (101 papers)
Citations (5)
