
A Comprehensive Survey on Graph Neural Networks (1901.00596v4)

Published 3 Jan 2019 in cs.LG and stat.ML

Abstract: Deep learning has revolutionized many machine learning tasks in recent years, ranging from image classification and video processing to speech recognition and natural language understanding. The data in these tasks are typically represented in the Euclidean space. However, there is an increasing number of applications where data are generated from non-Euclidean domains and are represented as graphs with complex relationships and interdependency between objects. The complexity of graph data has imposed significant challenges on existing machine learning algorithms. Recently, many studies on extending deep learning approaches for graph data have emerged. In this survey, we provide a comprehensive overview of graph neural networks (GNNs) in data mining and machine learning fields. We propose a new taxonomy to divide the state-of-the-art graph neural networks into four categories, namely recurrent graph neural networks, convolutional graph neural networks, graph autoencoders, and spatial-temporal graph neural networks. We further discuss the applications of graph neural networks across various domains and summarize the open source codes, benchmark data sets, and model evaluation of graph neural networks. Finally, we propose potential research directions in this rapidly growing field.

Authors (6)
  1. Zonghan Wu (11 papers)
  2. Shirui Pan (198 papers)
  3. Fengwen Chen (3 papers)
  4. Guodong Long (115 papers)
  5. Chengqi Zhang (74 papers)
  6. Philip S. Yu (592 papers)
Citations (7,629)

Summary

This survey provides a comprehensive overview of Graph Neural Networks (GNNs), addressing the increasing need for deep learning methodologies capable of handling graph-structured data prevalent in numerous domains like social networks, molecular chemistry, recommender systems, and knowledge graphs (Wu et al., 2019). It contrasts GNNs with traditional machine learning approaches, which often struggle with the non-Euclidean nature and complex interdependencies inherent in graph data. The survey introduces a taxonomy to structure the rapidly evolving GNN landscape, discusses various GNN frameworks, theoretical underpinnings, practical applications, evaluation methods, and available resources, concluding with challenges and future directions.

GNN Taxonomy

A central contribution of the survey is the proposed taxonomy classifying GNN models into four principal categories: Recurrent Graph Neural Networks (RecGNNs), Convolutional Graph Neural Networks (ConvGNNs), Graph Autoencoders (GAEs), and Spatial-Temporal Graph Neural Networks (STGNNs).

Recurrent Graph Neural Networks (RecGNNs)

RecGNNs represent the pioneering approaches applying recurrent neural architectures to graph data. These models, such as the original GNN* by Scarselli et al. and Graph Echo State Networks (GraphESN), leverage the concept of recursive neighborhood information propagation until node representations converge to a stable equilibrium. The core idea involves iteratively updating a node's hidden state $h_v$ based on its features $x_v$, the features of its neighbors $x_{ne[v]}$, and the hidden states of its neighbors $h_{ne[v]}$. A typical update function follows the form:

$$h_v^{(t+1)} = f\left(x_v,\ x_{co[v]},\ h_{ne[v]}^{(t)},\ x_{ne[v]}\right)$$

where $f$ is a parametric function (e.g., a feedforward neural network), $x_{co[v]}$ denotes the features of the edges incident to $v$, and the process repeats until $h_v^{(t+1)} \approx h_v^{(t)}$. Gated Graph Neural Networks (GGNNs) adapted this concept by using Gated Recurrent Units (GRUs) and unrolling the recurrence for a fixed number of steps $T$, obviating the need for convergence checks and enabling backpropagation through time (BPTT). RecGNNs laid the theoretical foundation for message passing on graphs but often suffer from high computational cost and potential convergence issues.
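
The fixed-step gated update lends itself to a compact implementation. Below is a minimal, illustrative PyTorch sketch of a GGNN-style layer (class and variable names are assumptions, not the paper's original code): each step sums transformed neighbor states and feeds them to a GRU cell, repeated for a fixed number of steps $T$.

```python
import torch
import torch.nn as nn

class GatedGraphLayer(nn.Module):
    """Illustrative GGNN-style update: aggregate neighbor states, then apply a GRU
    cell for a fixed number of steps (no convergence check is needed)."""
    def __init__(self, hidden_dim, num_steps):
        super().__init__()
        self.num_steps = num_steps
        self.message = nn.Linear(hidden_dim, hidden_dim)  # transforms neighbor states into messages
        self.gru = nn.GRUCell(hidden_dim, hidden_dim)     # gated state update

    def forward(self, adj, h):
        # adj: (N, N) adjacency matrix, h: (N, hidden_dim) initial node states
        for _ in range(self.num_steps):
            m = adj @ self.message(h)  # sum messages from neighbors
            h = self.gru(m, h)         # h^{(t+1)} = GRU(m^{(t)}, h^{(t)})
        return h

# toy usage: 4 nodes on a ring
adj = torch.tensor([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], dtype=torch.float)
h0 = torch.randn(4, 8)
print(GatedGraphLayer(hidden_dim=8, num_steps=3)(adj, h0).shape)  # torch.Size([4, 8])
```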

Convolutional Graph Neural Networks (ConvGNNs)

ConvGNNs generalize the convolution operation from regular grids (like images) to irregular graph structures. They are arguably the most prominent GNN category currently, offering efficient and effective hierarchical feature learning. The survey subdivides ConvGNNs into spectral-based and spatial-based approaches.

  • Spectral-based ConvGNNs: These methods define graph convolutions in the spectral domain using the graph Laplacian matrix $L = D - A$ (or its normalized variants). Early models like Spectral CNN performed eigendecomposition of the Laplacian, which is computationally expensive and not inherently localized. ChebNet approximated spectral filters using Chebyshev polynomials, improving localization and efficiency. Graph Convolutional Network (GCN) further simplified ChebNet by restricting the filter to operate only on 1-hop neighbors, leading to a highly efficient layer-wise propagation rule:

    $$H^{(l+1)} = \sigma\left(\hat{D}^{-\frac{1}{2}}\hat{A}\hat{D}^{-\frac{1}{2}} H^{(l)} W^{(l)}\right)$$

    where $\hat{A} = A + I$, $\hat{D}$ is the diagonal degree matrix of $\hat{A}$, $H^{(l)}$ is the matrix of node activations in layer $l$, and $W^{(l)}$ is the trainable weight matrix (a minimal implementation sketch of this propagation rule follows this list). Spectral methods possess a strong theoretical grounding in graph signal processing but can struggle with scalability to large graphs and transferring learned filters to graphs with different structures.

  • Spatial-based ConvGNNs: These methods define convolutions directly based on a node's spatial neighborhood structure, aggregating information from neighbors. This approach is generally more flexible, efficient, and scalable than spectral methods. The core operation involves an AGGREGATE function followed by an UPDATE function. Examples include GraphSAGE, which uses sampling strategies (e.g., uniform neighbor sampling) and various aggregation functions (mean, LSTM, pooling) for scalability. Graph Attention Networks (GAT) introduce attention mechanisms to assign different importance weights to neighbors during aggregation, improving expressiveness:

    $$h_i' = \sigma\left(\sum_{j \in \mathcal{N}_i \cup \{i\}} \alpha_{ij} W h_j\right)$$

    where $\alpha_{ij}$ are attention coefficients computed based on the features of nodes $i$ and $j$. Other spatial models include Message Passing Neural Networks (MPNN), a general framework encompassing many ConvGNNs, and Graph Isomorphism Network (GIN), which studies the theoretical representational power of GNNs concerning the Weisfeiler-Lehman (WL) test.
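
As referenced above, the following is a minimal sketch of the GCN layer-wise propagation rule $H^{(l+1)} = \sigma(\hat{D}^{-1/2}\hat{A}\hat{D}^{-1/2} H^{(l)} W^{(l)})$ using a dense adjacency matrix. It is illustrative only; libraries such as PyG and DGL provide optimized sparse implementations.

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One GCN propagation step on a dense adjacency matrix:
    H' = sigma(D_hat^{-1/2} (A + I) D_hat^{-1/2} H W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.weight = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, adj, h):
        # adj: (N, N) binary adjacency matrix, h: (N, in_dim) node features
        a_hat = adj + torch.eye(adj.size(0))                  # A_hat = A + I (add self-loops)
        d_inv_sqrt = torch.diag(a_hat.sum(dim=1).pow(-0.5))   # D_hat^{-1/2}
        norm_adj = d_inv_sqrt @ a_hat @ d_inv_sqrt            # symmetric normalization
        return torch.relu(norm_adj @ self.weight(h))

# toy usage: a 3-node path graph with 5-dimensional node features
adj = torch.tensor([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
x = torch.randn(3, 5)
print(GCNLayer(5, 16)(adj, x).shape)  # torch.Size([3, 16])
```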

Graph Autoencoders (GAEs)

GAEs are unsupervised learning frameworks applying the autoencoder paradigm to graphs. They typically consist of an encoder (often a ConvGNN) that maps nodes to low-dimensional latent representations (embeddings), $Z = \text{Encoder}(A, X)$, and a decoder that reconstructs graph information (e.g., the adjacency matrix) from these embeddings, $\hat{A} = \text{Decoder}(Z)$. GAEs are primarily used for network embedding (learning unsupervised node representations) and graph generation. Variational Graph Autoencoders (VGAEs) extend GAEs by incorporating a variational inference approach, learning a distribution over latent representations. Models like Structural Deep Network Embedding (SDNE) combine first-order and second-order proximity preservation. For graph generation, models like GraphRNN, MolGAN, and GraphVAE employ GAE principles, often combined with reinforcement learning or adversarial training, to generate novel graph structures with specific desired properties, finding applications in drug discovery and molecular design.
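
To make the encoder/decoder split concrete, here is a minimal sketch of a GAE with an inner-product decoder in the style of the (V)GAE described above; the encoder below is a placeholder standing in for a ConvGNN, and all names are illustrative assumptions rather than any library's API.

```python
import torch
import torch.nn as nn

class InnerProductGAE(nn.Module):
    """Sketch of a graph autoencoder: an encoder produces embeddings Z = Encoder(A, X)
    and an inner-product decoder reconstructs edge probabilities A_hat = sigmoid(Z Z^T)."""
    def __init__(self, encoder):
        super().__init__()
        self.encoder = encoder  # typically a ConvGNN; any module mapping (A, X) -> Z works

    def forward(self, adj, x):
        z = self.encoder(adj, x)          # Z = Encoder(A, X)
        a_hat = torch.sigmoid(z @ z.t())  # A_hat = Decoder(Z)
        return z, a_hat

class ToyEncoder(nn.Module):
    """Placeholder encoder: one propagation step followed by a linear map."""
    def __init__(self, in_dim, emb_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, emb_dim)
    def forward(self, adj, x):
        return self.lin(adj @ x)

# toy usage: 4-node path graph with self-loops, 5-dimensional node features
adj = torch.tensor([[1., 1., 0., 0.], [1., 1., 1., 0.], [0., 1., 1., 1.], [0., 0., 1., 1.]])
x = torch.randn(4, 5)
z, a_hat = InnerProductGAE(ToyEncoder(5, 8))(adj, x)
# training would minimize a reconstruction loss, e.g.:
# loss = nn.functional.binary_cross_entropy(a_hat, (adj > 0).float())
```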

Spatial-Temporal Graph Neural Networks (STGNNs)

STGNNs are designed to handle dynamic graphs where both graph structure and node attributes evolve over time. These models aim to capture spatial dependencies (relationships between nodes at a given time) and temporal dependencies (evolution of node attributes/structure over time) simultaneously. Common applications include traffic flow prediction, action recognition, and modeling evolving social networks. Architectures often integrate GNN components (typically ConvGNNs) to model spatial structure with sequence models (RNNs or temporal CNNs) to capture temporal dynamics. For instance, Diffusion Convolutional Recurrent Neural Network (DCRNN) uses diffusion graph convolutions within a GRU-based encoder-decoder sequence-to-sequence framework. ST-GCN applies graph convolutions followed by 1D temporal convolutions. Graph WaveNet introduces adaptive adjacency matrices learned directly from data and employs dilated causal convolutions for efficient temporal modeling over long ranges. These models are crucial for predictive tasks on time-varying graph data.
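
A minimal way to combine the two components is to apply a graph convolution independently at each time step and run a recurrent network over the resulting per-node sequences. The sketch below makes that pattern explicit; it uses a simple $A X W$ spatial step and a GRU, and is a simplified assumption-based illustration rather than any of the specific models named above.

```python
import torch
import torch.nn as nn

class SimpleSTGNN(nn.Module):
    """Sketch of a spatial-temporal block: a graph convolution captures spatial
    dependencies at each time step, then a GRU models temporal dynamics per node."""
    def __init__(self, in_dim, hidden_dim):
        super().__init__()
        self.spatial = nn.Linear(in_dim, hidden_dim)  # weight of a simple A @ X @ W spatial step
        self.temporal = nn.GRU(hidden_dim, hidden_dim, batch_first=True)

    def forward(self, adj, x_seq):
        # adj: (N, N) adjacency, x_seq: (T, N, in_dim) node features over T time steps
        spatial_out = torch.stack([torch.relu(adj @ self.spatial(x_t)) for x_t in x_seq])  # (T, N, H)
        per_node = spatial_out.permute(1, 0, 2)  # (N, T, H): one temporal sequence per node
        out, _ = self.temporal(per_node)         # GRU over the time dimension
        return out[:, -1]                        # final hidden state per node, shape (N, H)

# toy usage: 5 nodes observed over 12 time steps
adj = (torch.rand(5, 5) > 0.5).float()
x_seq = torch.randn(12, 5, 3)
print(SimpleSTGNN(3, 16)(adj, x_seq).shape)  # torch.Size([5, 16])
```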

General Frameworks and Training

The survey outlines general frameworks based on the target output level (node-level, edge-level, graph-level) and training paradigms (supervised, semi-supervised, unsupervised). Node-level tasks include node classification and regression. Edge-level tasks involve predicting edges (link prediction) or edge attributes. Graph-level tasks include graph classification, regression, and generation.

For graph-level outputs, graph pooling or readout layers are essential. These aggregate node representations from the final GNN layer into a fixed-size graph representation. Common strategies include simple sum/mean/max aggregation, attention-based methods, and learnable pooling schemes like SortPooling (which sorts node embeddings based on a canonical order) and DiffPool (a differentiable hierarchical pooling method that learns cluster assignments).
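
For instance, a sum readout is a one-line permutation-invariant aggregation; the small check below (illustrative, plain PyTorch) confirms that relabeling the nodes leaves the graph-level vector unchanged.

```python
import torch

def sum_readout(h):
    """Sum readout: collapse (N, d) node embeddings into one fixed-size graph vector."""
    return h.sum(dim=0)

h = torch.randn(7, 16)                     # 7 nodes, 16-dimensional embeddings
perm = torch.randperm(7)                   # an arbitrary relabeling of the nodes
print(torch.allclose(sum_readout(h), sum_readout(h[perm])))  # True: permutation invariant
```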

Training efficiency for large graphs is a critical consideration. Techniques discussed include neighbor sampling (e.g., GraphSAGE) to limit the neighborhood size during aggregation, graph sampling methods, and efficient implementations leveraging sparse matrix operations and optimized libraries like PyTorch Geometric (PyG) and Deep Graph Library (DGL).
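
As an illustration of the neighbor-sampling idea (a simplified stand-in, not GraphSAGE's or any library's actual API), the hypothetical helper below uniformly samples at most k neighbors per node so that aggregation cost stays bounded regardless of node degree.

```python
import random

def sample_neighbors(adj_list, node, k):
    """Uniformly sample at most k neighbors of a node (GraphSAGE-style fixed-size
    neighborhoods); illustrative helper, not a library API."""
    neighbors = adj_list[node]
    if len(neighbors) <= k:
        return list(neighbors)
    return random.sample(neighbors, k)

# toy star graph: node 0 is connected to nodes 1-4
adj_list = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
print(sample_neighbors(adj_list, 0, k=2))  # e.g. [3, 1]
```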

Theoretical Aspects

The survey touches upon several theoretical properties of GNNs:

  • Receptive Field: The k-hop neighborhood influencing a node's representation after k GNN layers.
  • VC Dimension: Analyzing the model complexity and generalization capabilities.
  • Graph Isomorphism: Relating the discriminative power of GNNs, particularly message-passing spatial ConvGNNs, to the Weisfeiler-Lehman (WL) graph isomorphism test. GIN showed that GNNs using sum aggregation and MLPs can be as powerful as the 1-WL test.
  • Equivariance/Invariance: Discussing how GNNs can be designed to be permutation equivariant (node representations change consistently with node permutations) or permutation invariant (graph-level representations remain unchanged under node permutations).
  • Universal Approximation: Exploring the capability of GNNs to approximate arbitrary functions on graphs.

Applications, Benchmarks, and Resources

GNNs have found wide application across diverse domains:

  • Computer Vision: Scene graph generation, point cloud classification, action recognition.
  • Natural Language Processing: Relation extraction, semantic parsing, text classification (modeling documents as graphs).
  • Traffic Forecasting: Predicting traffic speed, volume, or density using STGNNs on road networks.
  • Recommender Systems: Modeling user-item interactions as bipartite graphs for recommendation.
  • Chemistry and Biology: Molecular property prediction, drug discovery (modeling molecules as graphs), protein interaction networks.
  • Other areas: Program verification, combinatorial optimization, physics simulations.

The survey lists common benchmark datasets categorized by type (citation networks like Cora, Citeseer, Pubmed; biochemical graphs like MUTAG, QM9; social networks, knowledge graphs). It also discusses evaluation methodologies and potential pitfalls, particularly regarding dataset splits and performance comparisons in node classification. Finally, it points to major open-source libraries like PyG and DGL, which significantly facilitate GNN implementation and experimentation.

Challenges and Future Directions

Despite rapid progress, several challenges remain:

  • Shallow Architectures: Deeper GNNs often suffer from over-smoothing (node representations becoming indistinguishable) and vanishing gradients.
  • Dynamic Graphs: Efficiently handling graphs whose structure and features change rapidly over time remains complex.
  • Non-Structural Information: Incorporating information beyond graph topology, such as temporal or spatial relationships not explicitly encoded in edges.
  • Scalability: Applying GNNs to web-scale graphs with billions of nodes and edges requires further advances in sampling, parallelization, and distributed training.
  • Heterogeneous Graphs: Developing effective GNNs for graphs with multiple node and edge types.

Future research is expected to focus on addressing these limitations, exploring deeper architectures, developing more sophisticated models for dynamic and heterogeneous graphs, improving scalability, and further investigating the theoretical foundations of GNNs.

Conclusion

The survey "A Comprehensive Survey on Graph Neural Networks" (Wu et al., 2019 ) provides a structured and detailed overview of the GNN field as of early 2019. Its taxonomy effectively categorizes existing models, while the discussion covers fundamental concepts, key architectures, theoretical underpinnings, practical applications, and important resources. It serves as a valuable reference for understanding the landscape of GNNs and identifying pertinent challenges and opportunities for future research and development in applying deep learning to graph-structured data.
