
Adversarial Attacks and Defenses on Graphs: A Review, A Tool and Empirical Studies (2003.00653v3)

Published 2 Mar 2020 in cs.LG, cs.CR, and stat.ML

Abstract: Deep neural networks (DNNs) have achieved significant performance in various tasks. However, recent studies have shown that DNNs can be easily fooled by small perturbations on the input, known as adversarial attacks. As extensions of DNNs to graphs, Graph Neural Networks (GNNs) have been demonstrated to inherit this vulnerability. An adversary can mislead GNNs into giving wrong predictions by modifying the graph structure, for example by manipulating a few edges. This vulnerability has raised tremendous concerns about adopting GNNs in safety-critical applications and has attracted increasing research attention in recent years. Thus, it is necessary and timely to provide a comprehensive overview of existing graph adversarial attacks and the countermeasures. In this survey, we categorize existing attacks and defenses, and review the corresponding state-of-the-art methods. Furthermore, we have developed a repository with representative algorithms (https://github.com/DSE-MSU/DeepRobust/tree/master/deeprobust/graph). The repository enables us to conduct empirical studies to deepen our understanding of attacks and defenses on graphs.

Authors (7)
  1. Wei Jin (84 papers)
  2. Yaxin Li (27 papers)
  3. Han Xu (92 papers)
  4. Yiqi Wang (39 papers)
  5. Shuiwang Ji (122 papers)
  6. Charu Aggarwal (38 papers)
  7. Jiliang Tang (204 papers)
Citations (99)

Summary

Overview of "Adversarial Attacks and Defenses on Graphs: A Review, A Tool, and Empirical Studies"

The paper presents a comprehensive review of adversarial attacks and defenses in the context of Graph Neural Networks (GNNs). GNNs, as extensions of deep neural networks to graph-structured data, are highly effective at capturing complex relational information in domains such as social networks and biological networks. However, their vulnerability to adversarial attacks raises concerns about their application in safety-critical environments, necessitating a thorough understanding of both the attacks and potential defense mechanisms.

Adversarial Attacks on Graphs

GNNs are susceptible to adversarial attacks, where small perturbations can lead to significant misclassifications. The paper categorizes these attacks along several dimensions:

  1. Attacker's Capacity: Distinguishing between evasion attacks (test-time perturbations) and poisoning attacks (training-time manipulations).
  2. Perturbation Type: Including modifications to node features, edges, and even the injection of fake nodes.
  3. Attacker's Goal: Distinguishing targeted and untargeted attacks, where the former aim to mislead the model on specific nodes or graphs and the latter aim to degrade overall model performance.
  4. Attacker's Knowledge: Categorizing attacks into white-box, gray-box, and black-box, depending on how much the attacker knows about the victim model.
  5. Victim Models: Identifying prominent models shown to be vulnerable to attacks, such as GCN and GAT.

The review aggregates a variety of attack strategies, demonstrating the diverse ways GNNs can be compromised, and highlights the need for robust defense mechanisms.
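To make the taxonomy concrete, the minimal sketch below illustrates one common flavor of structure attack: a greedy, gradient-based loop that flips the edge whose first-order effect most increases the loss on a target node (an evasion attack with edge perturbations, in the terminology above). The `model`, `adj`, and function names are illustrative placeholders rather than the paper's algorithms; practical attacks such as Nettack or Metattack rely on more careful surrogate models and constraints.

```python
# Hypothetical sketch of a greedy gradient-based edge-flip (evasion) attack.
# `model` is any differentiable GNN taking (features, dense_adj); not from the paper.
import torch
import torch.nn.functional as F

def greedy_edge_flip(model, adj, features, labels, target_idx, budget=5):
    """Flip up to `budget` edges that most increase the loss on the target node."""
    adj = adj.clone().float()
    for _ in range(budget):
        adj_var = adj.clone().requires_grad_(True)
        logits = model(features, adj_var)                 # forward pass on current graph
        loss = F.cross_entropy(logits[target_idx:target_idx + 1],
                               labels[target_idx:target_idx + 1])
        grad = torch.autograd.grad(loss, adj_var)[0]
        # Flipping entry (i, j) changes it by (1 - 2*adj), so the first-order
        # gain in loss from a flip is approximately grad * (1 - 2*adj).
        score = grad * (1 - 2 * adj)
        score.fill_diagonal_(float('-inf'))               # never add self-loops
        i, j = divmod(int(torch.argmax(score)), adj.size(0))
        adj[i, j] = adj[j, i] = 1 - adj[i, j]             # apply the symmetric flip
    return adj
```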

Countermeasures Against Attacks

The paper also covers the development and categorization of defense strategies:

  1. Adversarial Training: Implementing robust training protocols by incorporating adversarial examples during model training to improve resilience.
  2. Adversarial Perturbation Detection: Methods for identifying and filtering out adversarial perturbations from the input data.
  3. Certifiable Robustness: Techniques providing formal guarantees on the safety of GNN predictions against certain perturbations.
  4. Graph Purification: Approaches focusing on cleaning the graph data before feeding it to GNNs, aiming to remove suspicious patterns (a minimal sketch of this idea follows the list).
  5. Attention Mechanism: Leveraging attention-based strategies to downweight potential adversarial nodes or edges during the learning process.
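As a concrete illustration of the graph-purification idea (in the spirit of the Jaccard-similarity preprocessing defenses the paper reviews), the sketch below drops edges whose endpoints share almost no binary features, on the assumption that many adversarially inserted edges connect unrelated nodes. The threshold and helper name are illustrative assumptions, not the exact recipe of any particular defense.

```python
# Hedged sketch of feature-similarity-based edge pruning (graph purification).
import numpy as np
import scipy.sparse as sp

def jaccard_purify(adj: sp.csr_matrix, features: np.ndarray, threshold: float = 0.01):
    """Return a copy of `adj` with low-similarity edges removed (binary features assumed)."""
    adj = adj.tolil(copy=True)
    rows, cols = adj.nonzero()
    for u, v in zip(rows, cols):
        if u >= v:                        # handle each undirected edge once
            continue
        fu, fv = features[u] > 0, features[v] > 0
        union = np.logical_or(fu, fv).sum()
        jaccard = np.logical_and(fu, fv).sum() / union if union > 0 else 0.0
        if jaccard < threshold:           # endpoints share almost no features: prune
            adj[u, v] = 0
            adj[v, u] = 0
    return adj.tocsr()
```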

Empirical Studies and Repository

A significant contribution of the paper is the development of a repository for graph adversarial attacks and defenses, DeepRobust. This toolkit facilitates the empirical evaluation of various attack and defense strategies, allowing researchers to quantitatively assess their effectiveness under different scenarios. By providing implementations of multiple attack algorithms, alongside robust defense strategies and benchmark datasets, the repository supports a better understanding and development of resilient GNN models.
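The snippet below sketches how the DeepRobust graph module is typically used: load a benchmark dataset, train a GCN surrogate, and run a poisoning structure attack whose modified graph can then be used to evaluate defenses. It is adapted from the project's documented usage; class and argument names may differ across versions, so treat it as an illustrative sketch rather than an authoritative recipe.

```python
# Hedged usage sketch for DeepRobust's graph module; APIs may vary by version.
import numpy as np
from deeprobust.graph.data import Dataset
from deeprobust.graph.defense import GCN
from deeprobust.graph.global_attack import Metattack

# Load a benchmark dataset shipped with the repository.
data = Dataset(root='/tmp/', name='cora')
adj, features, labels = data.adj, data.features, data.labels
idx_train, idx_val, idx_test = data.idx_train, data.idx_val, data.idx_test
idx_unlabeled = np.union1d(idx_val, idx_test)

# Train a simple GCN surrogate for the attacker.
surrogate = GCN(nfeat=features.shape[1], nclass=labels.max().item() + 1,
                nhid=16, with_relu=False, device='cpu')
surrogate.fit(features, adj, labels, idx_train)

# Run a poisoning (training-time) structure attack with a small edge budget.
attack = Metattack(surrogate, nnodes=adj.shape[0],
                   feature_shape=features.shape, device='cpu')
attack.attack(features, adj, labels, idx_train, idx_unlabeled,
              n_perturbations=10, ll_constraint=False)
modified_adj = attack.modified_adj   # poisoned graph for evaluating defenses
```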

Implications and Future Directions

The research implications are profound, particularly for applications in safety-critical domains like financial fraud detection, healthcare, and autonomous systems. The vulnerabilities uncovered necessitate continuous advancements in both attack strategies (to better understand potential vectors) and defense mechanisms (to fortify models).

Speculatively, future research directions could focus on developing more computationally efficient defense strategies, quantifying the imperceptibility of attacks more robustly, and exploring the transferability of adversarial examples across diverse graph domains. Scalability also remains a pressing challenge given the size and complexity of real-world graphs. Addressing these aspects will be crucial for the safe adoption of GNNs in various applications.

The work provides a solid foundation and a call to action for the research community to prioritize robustness in graph-based machine learning models.
