
Graph Structure Learning for Robust Graph Neural Networks (2005.10203v3)

Published 20 May 2020 in cs.LG, cs.CR, cs.SI, and stat.ML

Abstract: Graph Neural Networks (GNNs) are powerful tools in representation learning for graphs. However, recent studies show that GNNs are vulnerable to carefully-crafted perturbations, called adversarial attacks. Adversarial attacks can easily fool GNNs in making predictions for downstream tasks. The vulnerability to adversarial attacks has raised increasing concerns for applying GNNs in safety-critical applications. Therefore, developing robust algorithms to defend adversarial attacks is of great significance. A natural idea to defend adversarial attacks is to clean the perturbed graph. It is evident that real-world graphs share some intrinsic properties. For example, many real-world graphs are low-rank and sparse, and the features of two adjacent nodes tend to be similar. In fact, we find that adversarial attacks are likely to violate these graph properties. Therefore, in this paper, we explore these properties to defend adversarial attacks on graphs. In particular, we propose a general framework Pro-GNN, which can jointly learn a structural graph and a robust graph neural network model from the perturbed graph guided by these properties. Extensive experiments on real-world graphs demonstrate that the proposed framework achieves significantly better performance compared with the state-of-the-art defense methods, even when the graph is heavily perturbed. We release the implementation of Pro-GNN to our DeepRobust repository for adversarial attacks and defenses (footnote: https://github.com/DSE-MSU/DeepRobust). The specific experimental settings to reproduce our results can be found in https://github.com/ChandlerBang/Pro-GNN.

Authors (6)
  1. Wei Jin (84 papers)
  2. Yao Ma (149 papers)
  3. Xiaorui Liu (50 papers)
  4. Xianfeng Tang (62 papers)
  5. Suhang Wang (118 papers)
  6. Jiliang Tang (204 papers)
Citations (628)

Summary

Graph Structure Learning for Robust Graph Neural Networks: A Technical Overview

The paper "Graph Structure Learning for Robust Graph Neural Networks" introduces a framework named Pro-GNN designed to enhance the robustness of Graph Neural Networks (GNNs) against adversarial attacks. The authors identify and exploit intrinsic properties of real-world graphs to defend against such vulnerabilities.

Vulnerabilities in GNNs

GNNs have shown significant promise in representation learning across various domains. However, recent studies indicate their susceptibility to adversarial attacks, which subtly perturb graph structures or node features to disrupt model predictions. This vulnerability is concerning for applications in safety-critical domains.

Proposed Framework: Pro-GNN

Pro-GNN aims to simultaneously learn an optimal graph structure and the parameters of a robust GNN model. The framework capitalizes on three fundamental properties of graphs:

  1. Low Rank: Real-world graphs tend to have low-rank structures, indicating a reduced number of underlying factors driving connections.
  2. Sparsity: Many real-world graphs are sparse, with nodes connected only to a few neighbors.
  3. Feature Smoothness: Nodes that are connected typically have similar features.

The framework utilizes these properties to clean adversarially perturbed graphs, restoring their intended structure and information.
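These three properties can be written down as concrete penalty terms on a candidate adjacency matrix. The sketch below is a minimal numpy illustration of each term in isolation; in the paper they are weighted by hyperparameters and minimized jointly with the GNN training loss, so treat the function as an explanatory simplification rather than the actual implementation:

```python
import numpy as np

def prognn_regularizers(S, X):
    """Structure penalties that guide graph cleaning.

    S : (n, n) candidate adjacency matrix (assumed symmetric, non-negative)
    X : (n, d) node feature matrix
    """
    # Low rank: nuclear norm (sum of singular values) of S.
    low_rank = np.linalg.norm(S, ord="nuc")

    # Sparsity: elementwise l1 norm, which discourages spurious edges.
    sparsity = np.abs(S).sum()

    # Feature smoothness: tr(X^T L X) with graph Laplacian L = D - S,
    # which equals 0.5 * sum_ij S_ij * ||x_i - x_j||^2.
    L = np.diag(S.sum(axis=1)) - S
    smoothness = np.trace(X.T @ L @ X)

    return low_rank, sparsity, smoothness
```

Adversarial edges tend to connect dissimilar nodes and inflate the rank of the adjacency matrix, so a perturbed graph typically scores higher on these terms than its clean counterpart, which is exactly the signal the cleaning step exploits.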

Methodology

Pro-GNN combines a regularization-based graph structure learning approach with GNN optimization, formulated through the following key steps:

  1. Graph Reconstruction: It learns a new adjacency matrix that stays close to the observed (perturbed) one, minimizing the Frobenius norm of their difference subject to low-rank and sparsity penalties.
  2. Feature Smoothness Integration: Incorporates feature smoothness by penalizing large feature differences between connected nodes, ensuring that connected nodes remain similar in the learned graph.
  3. Joint Learning: Utilizes an alternating optimization scheme, iteratively updating the adjacency matrix and GNN parameters to enhance model robustness.
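The steps above can be sketched as a toy alternating loop. This is a hypothetical simplification, not the paper's implementation: the "GNN" is a single linear propagation layer S·X·W trained with squared loss, the structure step takes one gradient step on the reconstruction, smoothness, and model-loss terms followed by l1 soft-thresholding, and the nuclear-norm proximal step Pro-GNN also applies is omitted:

```python
import numpy as np

def soft_threshold(M, t):
    # Proximal operator of the elementwise l1 norm (sparsifies S).
    return np.sign(M) * np.maximum(np.abs(M) - t, 0.0)

def alternating_prognn(A, X, y, alpha=0.1, lam=0.1, gamma=0.1,
                       lr_S=0.01, lr_W=0.01, iters=50):
    """Alternate between a model-parameter step and a structure step.

    A : (n, n) perturbed adjacency, X : (n, d) features,
    y : (n, c) one-hot-like targets.
    """
    S = A.astype(float).copy()
    W = np.zeros((X.shape[1], y.shape[1]))
    # Pairwise squared feature distances, used by the smoothness gradient.
    sq = np.sum(X ** 2, axis=1)
    dist = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    losses = []
    for _ in range(iters):
        # (a) Model step: gradient descent on ||S X W - y||_F^2 w.r.t. W.
        pred = S @ X @ W
        losses.append(np.sum((pred - y) ** 2))
        W -= lr_W * (S @ X).T @ (pred - y)
        # (b) Structure step: descend the reconstruction, smoothness, and
        #     model-loss terms, then soft-threshold for sparsity.
        pred = S @ X @ W
        grad_S = (2.0 * (S - A)
                  + 0.5 * lam * dist
                  + gamma * (pred - y) @ (X @ W).T)
        S = soft_threshold(S - lr_S * grad_S, lr_S * alpha)
        S = np.clip((S + S.T) / 2.0, 0.0, 1.0)  # symmetric, valid weights
    return S, W, losses
```

The projection at the end of each structure step (symmetrization plus clipping) mirrors the constraint that the learned adjacency remain a valid weighted graph; the reconstruction term keeps it anchored to the observed topology while the other terms prune suspicious edges.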

Experimental Evaluation

The framework was evaluated on multiple standard datasets and outperformed state-of-the-art defense mechanisms in classification accuracy under targeted, non-targeted, and random attacks. Even at high perturbation rates, Pro-GNN maintained markedly higher accuracy, demonstrating that it recovers useful graph structure and preserves model performance.

Implications and Future Directions

The paper presents a comprehensive approach to addressing the adversarial vulnerabilities of GNNs. By grounding the defense in intrinsic graph properties, Pro-GNN offers a pathway toward robust learning in adversarial settings. Future work may explore additional structural properties and improve scalability to larger graphs and broader attack vectors.

Conclusion

The paper presents an effective framework for improving the robustness of GNNs against adversarial attacks. By leveraging core graph properties, Pro-GNN not only defends against perturbations but also ensures the recovery of meaningful graph structures necessary for downstream analytical tasks. This work sets a foundational approach for future research in developing resilient machine learning models over graph data.