
Graph Adversarial Training: Dynamically Regularizing Based on Graph Structure (1902.08226v2)

Published 20 Feb 2019 in cs.LG, cs.SI, and stat.ML

Abstract: Recent efforts show that neural networks are vulnerable to small but intentional perturbations on input features in visual classification tasks. Due to the additional consideration of connections between examples (e.g., articles with citation link tend to be in the same class), graph neural networks could be more sensitive to the perturbations, since the perturbations from connected examples exacerbate the impact on a target example. Adversarial Training (AT), a dynamic regularization technique, can resist the worst-case perturbations on input features and is a promising choice to improve model robustness and generalization. However, existing AT methods focus on standard classification, being less effective when training models on graph since it does not model the impact from connected examples. In this work, we explore adversarial training on graph, aiming to improve the robustness and generalization of models learned on graph. We propose Graph Adversarial Training (GraphAT), which takes the impact from connected examples into account when learning to construct and resist perturbations. We give a general formulation of GraphAT, which can be seen as a dynamic regularization scheme based on the graph structure. To demonstrate the utility of GraphAT, we employ it on a state-of-the-art graph neural network model, Graph Convolutional Network (GCN). We conduct experiments on two citation graphs (Citeseer and Cora) and a knowledge graph (NELL), verifying the effectiveness of GraphAT which outperforms normal training on GCN by 4.51% in node classification accuracy. Codes are available via: https://github.com/fulifeng/GraphAT.

Authors (4)
  1. Fuli Feng (143 papers)
  2. Xiangnan He (200 papers)
  3. Jie Tang (302 papers)
  4. Tat-Seng Chua (360 papers)
Citations (211)

Summary

Graph Adversarial Training: Dynamically Regularizing Based on Graph Structure

The paper "Graph Adversarial Training: Dynamically Regularizing Based on Graph Structure" introduces a novel approach to enhance the robustness of graph neural networks (GNNs) against adversarial attacks. The authors propose Graph Adversarial Training (GraphAT), a method that integrates adversarial training with graph-based learning by considering the structural relationships within graph data to improve the robustness and generalization of models trained on graphs.

Core Contributions

The primary contribution of this paper is the development of GraphAT, which dynamically models the effect of perturbations not only on individual nodes but also on nodes connected through the graph structure. This approach addresses a critical vulnerability in existing GNNs, which are susceptible to adversarial attacks due to the propagation and amplification of perturbations across connected nodes.
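
Concretely, the idea can be sketched as follows (a hedged reading of the general formulation described in the abstract; the symbols below are our own shorthand rather than the paper's exact notation): each node is perturbed so as to maximally disagree with its neighbors, and the model is then trained to keep its prediction on the perturbed node close to the neighbors' predictions.

$$
\mathcal{L}_{\text{GraphAT}} = \mathcal{L}_{\text{sup}} + \beta \sum_{i} \sum_{j \in \mathcal{N}_i} d\!\left(f(x_i + r_i^{g}, \Theta),\, f(x_j, \hat{\Theta})\right),
\qquad
r_i^{g} = \arg\max_{\|r\| \le \epsilon} \sum_{j \in \mathcal{N}_i} d\!\left(f(x_i + r, \hat{\Theta}),\, f(x_j, \hat{\Theta})\right),
$$

where $\mathcal{N}_i$ is the set of neighbors of node $i$, $d(\cdot,\cdot)$ is a divergence such as KL, $\hat{\Theta}$ denotes the current parameters treated as constants when the perturbation is constructed, and $\beta$ and $\epsilon$ control the regularization strength and the perturbation budget.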

  • Adversarial Training for Graphs: GraphAT adapts conventional adversarial training to account for interconnected data points in a graph. Leveraging the structural properties of graphs, it constructs perturbations that maximize the divergence between a target node's prediction and the predictions of its connected neighbors, and then trains the model to resist them.
  • Implementation and Efficiency: The paper includes an efficient implementation of GraphAT on the Graph Convolutional Network (GCN), demonstrating its practicality. The computational cost stays modest because the adversarial perturbations are generated with a linear (single gradient step) approximation; a hedged code sketch of this procedure follows the list below.
  • Empirical Evaluation: The authors rigorously test GraphAT on two citation networks (Citeseer and Cora) and a knowledge graph (NELL), where it achieves a 4.51% improvement in node classification accuracy over standard training on GCNs. Such empirical validation underscores the utility of GraphAT in enhancing model robustness.
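
To make the first two points concrete, here is a minimal PyTorch-style sketch of graph adversarial training with the linear, single-gradient-step approximation. The code accompanying the paper is a TensorFlow implementation; everything below, including the `model(x, adj)` signature, the `edges = (src, dst)` tensors, and the hyperparameter names, is an illustrative assumption rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F

def kl_divergence(p_logits, q_logits):
    """Per-node KL(p || q), computed from logits and summed over classes."""
    p = F.softmax(p_logits, dim=-1)
    return (p * (F.log_softmax(p_logits, dim=-1) - F.log_softmax(q_logits, dim=-1))).sum(-1)

def graph_adv_perturbation(model, x, adj, edges, eps):
    """Linear (single-gradient-step) approximation of the graph adversarial
    perturbation: push each node's prediction away from its neighbors'
    clean predictions, then rescale to the budget eps."""
    x = x.detach().requires_grad_(True)
    logits = model(x, adj)            # assumed GCN-style forward pass
    src, dst = edges                  # edge list: perturb src against neighbor dst
    div = kl_divergence(logits[dst].detach(), logits[src]).sum()
    grad = torch.autograd.grad(div, x)[0]
    r = eps * grad / (grad.norm(dim=-1, keepdim=True) + 1e-12)
    return r.detach()

def graphat_loss(model, x, adj, edges, labels, train_mask, eps=0.05, beta=1.0):
    """Supervised cross-entropy plus the graph adversarial regularizer."""
    logits = model(x, adj)
    sup = F.cross_entropy(logits[train_mask], labels[train_mask])

    r = graph_adv_perturbation(model, x, adj, edges, eps)
    logits_adv = model(x + r, adj)
    src, dst = edges
    # resist the perturbation: keep perturbed nodes close to their clean neighbors
    reg = kl_divergence(logits[dst].detach(), logits_adv[src]).mean()
    return sup + beta * reg
```

The efficiency point from the second bullet is visible here: the perturbation comes from a single extra forward/backward pass rather than an inner optimization loop, which keeps the overhead of GraphAT modest.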

Experimental Highlights

The experimental results show that GraphAT significantly improves the robustness of GNNs against adversarial perturbations. Its improvement is especially pronounced on less connected nodes, which are typically more susceptible to perturbations because their predictions depend on only a few neighbors. This property makes GraphAT particularly useful for applications involving graphs with a sparse connection structure.

Theoretical and Practical Implications

From a theoretical perspective, GraphAT provides a novel framework for adversarial training in domains characterized by interconnected data points. The work bridges adversarial robustness and graph-based learning by enforcing consistency between the predictions of connected nodes, exploiting the relational structure inherent in graph data.

Practically, the proposed method is applicable to a wide range of domains where graph-structured data is prevalent, including social networks, biological networks, and web-based knowledge systems. The improvement in robustness and accuracy offered by GraphAT can lead to more reliable deployments of GNN-based systems in these fields.

Speculations on Future Work

Future research could explore adapting GraphAT to other GNN architectures beyond GCNs, such as Graph Attention Networks (GATs) or Graph Isomorphism Networks (GINs). Additionally, extending GraphAT to handle dynamic graphs that evolve over time could further enhance its applicability. The integration of GraphAT in an end-to-end machine learning pipeline is another promising direction, potentially leading to more resilient AI systems in adversarial environments.

In conclusion, Graph Adversarial Training marks a significant step toward more robust graph neural networks, offering a principled way to mitigate adversarial attacks through the strategic use of graph structure. The work lays a foundation for future explorations into adversary-resilient graph-based learning methods.
