
Adversarial Attack and Defense on Graph Data: A Survey (1812.10528v4)

Published 26 Dec 2018 in cs.CR, cs.AI, and cs.SI

Abstract: Deep neural networks (DNNs) have been widely applied to various applications, including image classification, text generation, audio recognition, and graph data analysis. However, recent studies have shown that DNNs are vulnerable to adversarial attacks. Though there are several works about adversarial attack and defense strategies on domains such as images and natural language processing, it is still difficult to directly transfer the learned knowledge to graph data due to its representation structure. Given the importance of graph analysis, an increasing number of studies over the past few years have attempted to analyze the robustness of machine learning models on graph data. Nevertheless, existing research considering adversarial behaviors on graph data often focuses on specific types of attacks with certain assumptions. In addition, each work proposes its own mathematical formulation, which makes the comparison among different methods difficult. Therefore, this review is intended to provide an overall landscape of more than 100 papers on adversarial attack and defense strategies for graph data, and establish a unified formulation encompassing most graph adversarial learning models. Moreover, we also compare different graph attacks and defenses along with their contributions and limitations, as well as summarize the evaluation metrics, datasets and future trends. We hope this survey can help fill the gap in the literature and facilitate further development of this promising new field.

Authors (8)
  1. Lichao Sun (186 papers)
  2. Yingtong Dou (19 papers)
  3. Carl Yang (130 papers)
  4. Ji Wang (210 papers)
  5. Yixin Liu (108 papers)
  6. Philip S. Yu (592 papers)
  7. Lifang He (98 papers)
  8. Bo Li (1107 papers)
Citations (243)

Summary

Comprehensive Analysis of Adversarial Attack and Defense on Graph Data

The paper, "Adversarial Attack and Defense on Graph Data: A Survey -- Supplemental Materials," offers an extensive examination of the current landscape in graph adversarial learning, focusing on both attack and defense mechanisms within this domain. This analysis is positioned as a foundational effort toward structuring the complex interplay of algorithms and methodologies prevalent in adversarial learning on graph data. The authors have curated a wide array of approaches, accompanied by open-source implementations, to serve as a resource for researchers pursuing advancements in this field.

Summary of Contributions

The primary contribution of the paper is a thorough classification of adversarial attacks and defenses applicable to graph data. The paper organizes the various methodologies into clear taxonomies, making it easier for researchers to navigate the intricate landscape of available techniques, and it assembles an extensive collection of open-source resources that facilitates future implementations and deeper exploration. A particular highlight is the Graph Robustness Benchmark (GRB), presented as a standardized framework for evaluating adversarial robustness in node classification tasks.
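To make the attack side of this taxonomy concrete, the sketch below illustrates one common pattern covered by the survey: a gradient-guided structure (edge-flip) attack on a small GCN. It is a minimal illustration under assumed names (`TinyGCN`, `greedy_edge_flip_attack`) and a dense adjacency matrix, not a reference implementation from the survey or any of its listed repositories.

```python
# Minimal sketch of a gradient-guided structure attack on a toy GCN, assuming a
# dense float adjacency matrix. Names (TinyGCN, greedy_edge_flip_attack) are
# illustrative; this is not code from the survey or its listed repositories.
import torch
import torch.nn.functional as F


def normalize_adj(adj):
    """Symmetric GCN normalization: D^{-1/2} (A + I) D^{-1/2}."""
    a_hat = adj + torch.eye(adj.size(0))
    d_inv_sqrt = a_hat.sum(dim=1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * a_hat * d_inv_sqrt.unsqueeze(0)


class TinyGCN(torch.nn.Module):
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.lin1 = torch.nn.Linear(in_dim, hid_dim)
        self.lin2 = torch.nn.Linear(hid_dim, n_classes)

    def forward(self, adj_norm, x):
        h = torch.relu(adj_norm @ self.lin1(x))
        return adj_norm @ self.lin2(h)


def greedy_edge_flip_attack(model, adj, x, labels, budget):
    """Flip the `budget` edge slots whose gradients suggest the largest loss increase."""
    adj = adj.clone().requires_grad_(True)
    loss = F.cross_entropy(model(normalize_adj(adj), x), labels)
    grad = torch.autograd.grad(loss, adj)[0]
    # Adding an absent edge helps the attacker when the gradient is positive;
    # removing an existing edge helps when it is negative.
    score = grad * (1.0 - 2.0 * adj.detach())
    n = adj.size(0)
    upper = torch.triu(torch.ones(n, n, dtype=torch.bool), diagonal=1)
    score = score.masked_fill(~upper, float("-inf"))  # one slot per undirected pair
    top = torch.topk(score.flatten(), budget).indices
    perturbed = adj.detach().clone()
    for idx in top:
        u, v = divmod(idx.item(), n)
        perturbed[u, v] = perturbed[v, u] = 1.0 - perturbed[u, v]
    return perturbed
```

Real attacks in the surveyed literature add refinements (unnoticeability constraints, surrogate models, sequential re-scoring), but the budgeted edge-flip structure above is the shared skeleton.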

Noteworthy Numerical Results

The paper includes a noteworthy compilation of algorithms grouped into categories such as Graph Attack, Graph Defense, Other Baseline, and Benchmark approaches. The open-source implementations listed provide practical insight into the real-world applicability and effectiveness of these approaches. Tools such as GRB are expected to become standard instruments for assessing algorithmic robustness and to provide a consistent basis for comparison across studies.

Implications of Research

Practical Implications

The structured taxonomy aids practitioners in identifying appropriate attack or defense strategies against specific adversarial threats on graph-structured data. The open-source catalog eases the integration and application of these techniques in real-world contexts, such as social network analysis or cybersecurity.
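As one concrete example on the defense side, several preprocessing defenses covered by the survey prune edges between nodes with dissimilar features before training. The sketch below shows that idea with NumPy on a binary feature matrix; the function names and the 0.01 threshold are illustrative assumptions, not the survey's implementation.

```python
# Minimal preprocessing-defense sketch (illustrative, not the survey's code):
# drop edges whose endpoints have dissimilar binary features, in the spirit of
# similarity-based edge-pruning defenses discussed in the graph-defense literature.
import numpy as np


def jaccard_similarity(a, b):
    """Jaccard similarity between two binary feature vectors."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union > 0 else 0.0


def prune_dissimilar_edges(adj, features, threshold=0.01):
    """Return a copy of `adj` with edges between dissimilar nodes removed."""
    adj = adj.copy()
    rows, cols = np.nonzero(np.triu(adj, k=1))  # each undirected edge once
    for u, v in zip(rows, cols):
        if jaccard_similarity(features[u], features[v]) < threshold:
            adj[u, v] = adj[v, u] = 0
    return adj
```

The intuition, as the survey discusses for this family of defenses, is that many structure attacks insert edges between unrelated nodes, so low-similarity edges are disproportionately likely to be adversarial.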

Theoretical Implications

The unified formulation for adversarial learning proposed by the authors establishes a theoretical foundation that can be leveraged to develop more sophisticated models. The paper provides a detailed summary of existing metrics and suggests potential areas for enhancing these techniques, setting a framework for future theoretical exploration.
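For readers unfamiliar with how such a unified formulation typically looks, the schematic bilevel objective below captures the common pattern (the attacker perturbs the graph within a budget to degrade a trained model); the notation is illustrative and does not reproduce the survey's exact symbols.

```latex
% Schematic attacker objective (illustrative notation, not the survey's exact formulation):
% the attacker chooses a perturbed graph \hat{G} = (\hat{A}, \hat{X}) within budget \Delta
% to maximize the victim's loss, while the parameters are trained on G'
% (G' = G for evasion attacks, G' = \hat{G} for poisoning attacks).
\max_{\hat{G} \in \Phi(G)} \; \mathcal{L}_{\mathrm{atk}}\!\left(f_{\theta^{*}}(\hat{G})\right)
\quad \text{s.t.} \quad
\theta^{*} = \arg\min_{\theta} \mathcal{L}_{\mathrm{train}}\!\left(f_{\theta}(G')\right),
\qquad
\Phi(G) = \left\{ \hat{G} : \|\hat{A} - A\|_{0} + \|\hat{X} - X\|_{0} \le \Delta \right\}
```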

Speculation on Future Developments in AI

As AI continues to progress, the intricacies of adversarial learning and robustness will become increasingly relevant. Future research may prioritize developing methods that not only perform efficiently in controlled environments but also exhibit robustness in dynamic, real-world settings. Further advancements could explore novel perturbation strategies or refine existing metrics for better assessing the subtle vulnerabilities of graph data structures.

Conclusion

In conclusion, this survey delivers a comprehensive and methodical exploration of adversarial attacks and defense mechanisms within graph data. By compiling and categorizing a diverse array of existing works, the paper lays a robust groundwork for future research and innovation in this field. The methodologies highlighted will undoubtedly inform subsequent investigations, contributing to the evolving understanding of adversarial learning and robustness in graph-based models. The anticipated future directions offer a promising outlook for the integration of strengthened security measures within AI systems employing graph structures.