
Deep Learning on Graphs: A Survey (1812.04202v3)

Published 11 Dec 2018 in cs.LG, cs.SI, and stat.ML

Abstract: Deep learning has been shown to be successful in a number of domains, ranging from acoustics, images, to natural language processing. However, applying deep learning to the ubiquitous graph data is non-trivial because of the unique characteristics of graphs. Recently, substantial research efforts have been devoted to applying deep learning methods to graphs, resulting in beneficial advances in graph analysis techniques. In this survey, we comprehensively review the different types of deep learning methods on graphs. We divide the existing methods into five categories based on their model architectures and training strategies: graph recurrent neural networks, graph convolutional networks, graph autoencoders, graph reinforcement learning, and graph adversarial methods. We then provide a comprehensive overview of these methods in a systematic manner mainly by following their development history. We also analyze the differences and compositions of different methods. Finally, we briefly outline the applications in which they have been used and discuss potential future research directions.

Deep Learning on Graphs: A Survey

The paper "Deep Learning on Graphs: A Survey" by Ziwei Zhang, Peng Cui, and Wenwu Zhu provides a comprehensive and systematic overview of the diverse methods and architectures applied to graphs using deep learning. Given the complexity and unique characteristics of graph-structured data, the authors categorize the deep learning methods on graphs into five principal types: graph recurrent neural networks (Graph RNNs), graph convolutional networks (GCNs), graph autoencoders (GAEs), graph reinforcement learning (Graph RL), and graph adversarial methods. This classification is based on the model architectures and training strategies employed.

Graph Recurrent Neural Networks

Graph RNNs are used to capture recursive and sequential patterns in graphs and are subdivided into node-level and graph-level RNNs:

  • Node-Level RNNs: Methods such as the original GNN, GGS-NNs, and SSE encode graph structural information by defining a recursive state update for each node based on its neighbors' states.
  • Graph-Level RNNs: These methods, such as those used in autoregressive graph generation models and dynamic graph neural networks, apply RNNs to the entire graph, capturing temporal dynamics and graph generation processes.
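The node-level recursive update can be made concrete with a small sketch. Below is a minimal NumPy illustration (not code from the paper; the weight names and the tanh nonlinearity are illustrative choices) of the original GNN idea: iterate a neighbor-aggregating state update until it reaches a fixed point, which is guaranteed to exist when the update is a contraction, e.g. with small weights.

```python
import numpy as np

def gnn_fixed_point(adj, x, W_self, W_nbr, n_iters=100, tol=1e-6):
    """Iterate a GNN-style recursive state update to a fixed point:
    h_v <- tanh(W_self x_v + W_nbr * sum_{u in N(v)} h_u).
    adj: (n, n) adjacency matrix; x: (n, d) node features."""
    n = adj.shape[0]
    h = np.zeros((n, W_nbr.shape[0]))          # initial node states
    for _ in range(n_iters):
        h_new = np.tanh(x @ W_self.T + adj @ h @ W_nbr.T)
        if np.max(np.abs(h_new - h)) < tol:    # converged to fixed point
            return h_new
        h = h_new
    return h
```

With small random weights the map is contractive, so the iteration converges quickly; the resulting states can then feed a downstream readout for node or graph classification.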

Graph Convolutional Networks

GCNs have become a cornerstone of graph-based deep learning due to their capability to learn from graph structures and node features via convolution operations:

  • Spectral Methods: Early works, such as that of Bruna et al., define convolutions through the graph Fourier transform. Their computational cost and lack of transferability across different graph structures motivated more scalable approaches, notably ChebNet, which approximates spectral filters with Chebyshev polynomials and thereby makes them spatially localized.
  • Spatial Methods: Spatial convolutions aggregate information directly from node neighborhoods. The GCN of Kipf and Welling, a first-order simplification of ChebNet, uses localized filter operations that greatly improve scalability.
  • Frameworks and Innovations: Unified frameworks like MPNNs and GraphSAGE, along with enhancement techniques such as attention mechanisms (e.g., GAT, GaAN), residual/jumping connections, and efficient sampling methods, mark significant progress in the domain of GCNs.
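The propagation rule of Kipf and Welling's GCN is compact enough to sketch directly. The following NumPy snippet (a minimal dense-matrix illustration, not an efficient sparse implementation) computes one layer, ReLU(D̂^{-1/2} (A + I) D̂^{-1/2} H W), where self-loops are added before symmetric degree normalization.

```python
import numpy as np

def gcn_layer(adj, h, weight):
    """One Kipf & Welling GCN layer: ReLU(D^-1/2 (A+I) D^-1/2 H W).
    adj: (n, n) adjacency; h: (n, d_in) features; weight: (d_in, d_out)."""
    a_hat = adj + np.eye(adj.shape[0])              # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))   # D^-1/2 diagonal
    norm = a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(norm @ h @ weight, 0.0)       # ReLU activation
```

Stacking two such layers (with a softmax readout on the last) recovers the standard semi-supervised node-classification architecture; each layer mixes information from one additional hop of the neighborhood.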

Graph Autoencoders

GAEs leverage the intrinsic low-rank structures of graphs for unsupervised learning tasks, representing nodes as embeddings:

  • Standard Autoencoders: Examples include SAE, SDNE, and DNGR, differing mainly in their loss functions and the graph features they reconstruct.
  • Variational Autoencoders: VGAE and DVNE place probabilistic models over node embeddings, with encoders and decoders tailored to graph data, and incorporate objectives such as the KL divergence and the Wasserstein distance to preserve node proximities.
  • Enhancements: Adversarial training methods like ARGA/ARVGA and NetRA enhance the robustness and generalization capacity of GAEs by integrating GANs.
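The core GAE idea, embedding nodes and reconstructing the adjacency structure, can be sketched in a few lines. Below is a minimal NumPy illustration (assumed for exposition; function names are mine, and real GAEs learn the embeddings with an encoder) of the inner-product decoder used by VGAE-style models and a binary cross-entropy reconstruction loss.

```python
import numpy as np

def gae_reconstruct(z):
    """Inner-product decoder: A_hat = sigmoid(Z Z^T), where row i of z
    is the embedding of node i. Returns edge probabilities in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-(z @ z.T)))

def reconstruction_loss(adj, z, eps=1e-9):
    """Binary cross-entropy between the adjacency matrix and its
    reconstruction; minimizing it pulls connected nodes together."""
    a_hat = gae_reconstruct(z)
    return -np.mean(adj * np.log(a_hat + eps)
                    + (1 - adj) * np.log(1 - a_hat + eps))
```

A variational variant would add a KL term pushing the embedding distribution toward a prior; adversarially regularized variants (ARGA/ARVGA) instead use a discriminator for that role.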

Graph Reinforcement Learning

Graph RL approaches, such as GCPN and MolGAN, use reinforcement learning to generate graph structures under non-differentiable objectives and constraints:

  • Generation and Prediction: Methods like GTPN predict chemical reactions, while GAM and DeepPath focus on graph classification and reasoning in knowledge graphs. The reinforcement learning paradigm proves beneficial in tasks requiring sequential decision-making or when dealing with complex, multi-step objectives.

Graph Adversarial Methods

Adversarial methods applied to graphs include both adversarial attacks and adversarial training:

  • Adversarial Training: Methods such as GraphGAN, ANE, and NetGAN employ GANs to improve the robustness and performance of graph embeddings and generative models.
  • Adversarial Attacks: Techniques like Nettack and the strategies proposed by Dai et al. and Zügner and Günnemann focus on exposing vulnerabilities in graph-based models, prompting model improvements and enhanced robustness.
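A structure attack can be illustrated with a tiny sketch. The NumPy code below (my simplified illustration in the spirit of Nettack, not its actual algorithm) greedily flips the edge incident to a target node that most lowers that node's true-class logit under a linearized two-layer GCN surrogate S = Â_norm Â_norm X W.

```python
import numpy as np

def normalized_adj(adj):
    """Symmetrically normalized adjacency with self-loops."""
    a_hat = adj + np.eye(adj.shape[0])
    d = 1.0 / np.sqrt(a_hat.sum(axis=1))
    return a_hat * d[:, None] * d[None, :]

def greedy_edge_attack(adj, x, w, target, true_class, budget=1):
    """Greedily flip the incident edge that most lowers the target
    node's true-class logit under a linearized GCN surrogate."""
    adj = adj.copy()
    n = adj.shape[0]
    for _ in range(budget):
        a_norm = normalized_adj(adj)
        best = None
        best_score = (a_norm @ a_norm @ x @ w)[target, true_class]
        for u in range(n):
            if u == target:
                continue
            cand = adj.copy()
            cand[target, u] = cand[u, target] = 1 - cand[target, u]  # flip
            c_norm = normalized_adj(cand)
            score = (c_norm @ c_norm @ x @ w)[target, true_class]
            if score < best_score:          # keep the most damaging flip
                best, best_score = cand, score
        if best is None:                    # no flip helps; stop early
            break
        adj = best
    return adj
```

Even a budget of one or two edge flips can noticeably shift a node's prediction, which is why such attacks motivated the robustness work surveyed above.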

Implications and Future Directions

The implications of these advancements are far-reaching, spanning applications in social networks, recommendation systems, biological networks, and numerous other domains. In particular, the integration of domain knowledge and the development of models tailored to specific graph structures (e.g., heterogeneous graphs, signed networks) and dynamic graphs remain critical areas for future research. The interpretability and robustness of these models also demand further attention, especially in safety-critical applications.

In conclusion, the paper by Zhang et al. affirms the significance of deep learning on graphs as a burgeoning field with substantial theoretical and practical potential. By thoroughly categorizing existing methods and highlighting areas for improvement and future exploration, this survey serves as a vital resource for researchers working to harness the power of graph-structured data in deep learning.

Authors (3)
  1. Ziwei Zhang (40 papers)
  2. Peng Cui (116 papers)
  3. Wenwu Zhu (104 papers)
Citations (1,256)