
GNNGuard: Defending Graph Neural Networks against Adversarial Attacks (2006.08149v3)

Published 15 Jun 2020 in cs.LG and stat.ML

Abstract: Deep learning methods for graphs achieve remarkable performance across a variety of domains. However, recent findings indicate that small, unnoticeable perturbations of graph structure can catastrophically reduce performance of even the strongest and most popular Graph Neural Networks (GNNs). Here, we develop GNNGuard, a general algorithm to defend against a variety of training-time attacks that perturb the discrete graph structure. GNNGuard can be straightforwardly incorporated into any GNN. Its core principle is to detect and quantify the relationship between the graph structure and node features, if one exists, and then exploit that relationship to mitigate negative effects of the attack. GNNGuard learns how to best assign higher weights to edges connecting similar nodes while pruning edges between unrelated nodes. The revised edges allow for robust propagation of neural messages in the underlying GNN. GNNGuard introduces two novel components, the neighbor importance estimation, and the layer-wise graph memory, and we show empirically that both components are necessary for a successful defense. Across five GNNs, three defense methods, and five datasets, including a challenging human disease graph, experiments show that GNNGuard outperforms existing defense approaches by 15.3% on average. Remarkably, GNNGuard can effectively restore state-of-the-art performance of GNNs in the face of various adversarial attacks, including targeted and non-targeted attacks, and can defend against attacks on heterophily graphs.

Authors (2)
  1. Xiang Zhang (395 papers)
  2. Marinka Zitnik (79 papers)
Citations (261)

Summary

Overview of GNNGuard: Defending Graph Neural Networks against Adversarial Attacks

The paper, "GNNGuard: Defending Graph Neural Networks against Adversarial Attacks," addresses a critical vulnerability in the field of graph machine learning: the susceptibility of Graph Neural Networks (GNNs) to adversarial attacks. This research introduces GNNGuard, a method that enhances the robustness of GNNs against attacks that perturb the graph structure during training. The authors thoroughly investigate and empirically validate their approach across various datasets and GNN models, demonstrating its efficacy in preserving the accuracy of GNNs under adversarial conditions.

Defense Mechanism and Components

GNNGuard integrates a two-pronged approach comprising neighbor importance estimation and layer-wise graph memory. The neighbor importance estimation component detects and suppresses the influence of adversarially perturbed edges. Leveraging the network-science principle of homophily, GNNGuard assigns higher weights to edges that connect similar nodes while down-weighting those between dissimilar nodes. This dynamic adjustment is driven by importance weights computed from node feature similarity, ensuring that genuine neural messages propagate effectively.
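
To make the idea concrete, the sketch below shows one plausible way to compute such edge importance weights: cosine similarity between the endpoint representations at a given layer, pruning of low-similarity (likely adversarial) edges, and per-node normalization. The function name, threshold value, and normalization scheme are illustrative assumptions, not the authors' exact implementation, which also involves learnable parameters inside each message-passing layer.

```python
import torch
import torch.nn.functional as F

def neighbor_importance(h, edge_index, prune_threshold=0.1):
    """Illustrative sketch of neighbor importance estimation (assumed interface).

    h              : [num_nodes, dim] node representations at the current GNN layer
    edge_index     : [2, num_edges] COO edge list (source, target), long tensor
    prune_threshold: similarity below which an edge is treated as unrelated and pruned
    Returns a per-edge weight used to rescale messages in the layer's aggregation.
    """
    src, dst = edge_index
    # Cosine similarity between endpoint features quantifies per-edge homophily.
    sim = F.cosine_similarity(h[src], h[dst], dim=-1)
    sim = sim.clamp(min=0.0)                              # negative similarity -> unrelated
    sim = torch.where(sim < prune_threshold,
                      torch.zeros_like(sim), sim)          # prune likely-adversarial edges
    # Normalize weights over each target node's incoming edges.
    denom = torch.zeros(h.size(0), device=h.device,
                        dtype=sim.dtype).index_add_(0, dst, sim) + 1e-8
    return sim / denom[dst]
```

In a full defense pipeline these weights would be recomputed at every layer from that layer's hidden representations and used to rescale neural messages before aggregation.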

The layer-wise graph memory component stabilizes the altered graph structure across GNN layers. It retains partial memory of the edge importance from previous layers, providing a smoother update of node representations and guarding against abrupt changes in connections due to adversarial triggers. This strategic use of memory coefficients helps maintain consistent defense across layers, further strengthening GNN resilience.
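
A minimal sketch of how such a memory coefficient could blend edge weights across consecutive layers follows; the function signature and the fixed coefficient value are assumptions for illustration, whereas the paper treats the memory coefficient as a learnable quantity.

```python
def smooth_edge_weights(prev_weights, new_weights, beta=0.5):
    """Illustrative layer-wise graph memory (assumed interface).

    prev_weights : per-edge weights carried over from the previous GNN layer (None at layer 0)
    new_weights  : per-edge weights estimated at the current layer
    beta         : memory coefficient in [0, 1]; larger values retain more of the past
    """
    if prev_weights is None:          # first layer has nothing to remember yet
        return new_weights
    # Convex combination keeps partial memory of earlier edge importances,
    # preventing abrupt changes in connectivity between consecutive layers.
    return beta * prev_weights + (1.0 - beta) * new_weights
```

Applied once per layer, this update yields smoothed edge weights that then govern message passing in that layer, which is the stabilizing effect described above.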

Empirical Validation

The authors conduct extensive empirical evaluations using five GNN architectures (GCN, GAT, GIN, JK-Net, and GraphSAINT) across datasets that include Cora, Citeseer, ogbn-arxiv, and a human disease graph. The experiments cover three types of adversarial attacks on graph structure: direct targeted, influence targeted, and non-targeted. GNNGuard consistently outperforms state-of-the-art defense techniques such as GNN-Jaccard, RobustGCN, and GNN-SVD, by an average margin of 15.3%. Notably, GNNGuard restores the performance of GNNs to levels comparable to non-attacked conditions, highlighting its robust defense capabilities.

Implications and Future Directions

The implications of this work are significant for both practical applications and theoretical advancements. Practically, GNNGuard's ability to defend GNNs without extensive modifications to existing architectures makes it a versatile tool for enhancing model robustness in domains such as bioinformatics, social networks, and cybersecurity. Theoretically, the paper provides insights into the interaction between adversarial resilience and graph-based learning, potentially inspiring future work on robustness certification and adaptive defense mechanisms in graph neural networks.

Furthermore, the adaptability of GNNGuard to both homophily and heterophily graphs expands its utility beyond traditional network settings, making it applicable to a wide array of graph-structured data problems. The authors' decision to open-source the code and datasets encourages further exploration and adoption within the research community, fostering continued innovation in safeguarding machine learning models against adversarial manipulation.

Conclusion

GNNGuard represents a significant contribution to the field of graph neural networks, providing a tangible solution to the prevalent challenge of adversarial attacks. By integrating adaptive defense strategies directly into the GNN framework, the authors present a robust mechanism that not only counters existing vulnerabilities but also sets a foundation for further advancements in secure graph learning. This work underscores the importance of continuously evolving defense mechanisms to align with the growing capabilities and complexities within the domain of artificial intelligence.