
A Survey on The Expressive Power of Graph Neural Networks (2003.04078v4)

Published 9 Mar 2020 in cs.LG and stat.ML

Abstract: Graph neural networks (GNNs) are effective machine learning models for various graph learning problems. Despite their empirical successes, the theoretical limitations of GNNs have been revealed recently. Consequently, many GNN models have been proposed to overcome these limitations. In this survey, we provide a comprehensive overview of the expressive power of GNNs and provably powerful variants of GNNs.

Authors (1)
  1. Ryoma Sato
Citations (165)

Summary

Overview of Graph Neural Networks' Expressive Power

Graph Neural Networks (GNNs) have emerged as a powerful tool in machine learning, specifically tailored to graph-structured data. Despite their impressive empirical performance across domains ranging from biochemical applications to social network analysis, understanding and enhancing their theoretical expressive power remains an area of active research. This survey comprehensively investigates the capabilities and limitations inherent to GNNs and discusses advanced variants that address these limitations.

Expressive Limitations of GNNs

The foundational GNN models often struggle to distinguish non-isomorphic graphs that share similar structural properties. Central to this limitation is the inability of GNNs to differentiate between certain graph pairs, a constraint imposed by their message-passing architecture. For instance, standard GNNs cannot tell apart distinct molecular structures represented by k-regular graphs: the networks produce identical node embeddings for graphs whose local structure is the same even though they differ globally. This phenomenon directly challenges the universality of GNNs in approximating arbitrary graph functions, contrasting sharply with the universal approximation capabilities of MLPs.
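
To make this limitation concrete, the following is a minimal sketch (not from the paper) of 1-WL color refinement, the procedure whose distinguishing power upper-bounds message-passing GNNs. It shows two non-isomorphic 2-regular graphs, a 6-cycle and two disjoint triangles, receiving identical color histograms; graph and function names are illustrative.

```python
# 1-WL color refinement: iteratively recolor each node by its own color
# plus the multiset of its neighbors' colors.
from collections import Counter

def wl_refinement(adj, rounds=3):
    """Run 1-WL color refinement; return the final multiset of colors."""
    n = len(adj)
    colors = [0] * n  # uniform initial features, as in featureless graphs
    for _ in range(rounds):
        # New color = (own color, sorted multiset of neighbor colors)
        signatures = [
            (colors[v], tuple(sorted(colors[u] for u in adj[v])))
            for v in range(n)
        ]
        # Relabel signatures to compact integer colors
        palette = {s: i for i, s in enumerate(sorted(set(signatures)))}
        colors = [palette[s] for s in signatures]
    return Counter(colors)

# Two 2-regular graphs on 6 nodes: a 6-cycle vs. two disjoint triangles.
cycle_6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
two_triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1],
                 3: [4, 5], 4: [3, 5], 5: [3, 4]}

# Identical color histograms: 1-WL (hence any standard message-passing
# GNN) cannot tell these non-isomorphic graphs apart.
print(wl_refinement(cycle_6) == wl_refinement(two_triangles))  # True
```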

Enhancing GNN Expressivity Through WL Correspondence

Recent works by Xu et al. and Morris et al. establish a critical correspondence between GNNs and the Weisfeiler-Lehman (WL) graph isomorphism tests. Specifically, they show that traditional message-passing GNN architectures are at most as powerful as the 1-dimensional WL test, and that variants such as Graph Isomorphism Networks (GINs) reach this theoretical limit exactly. GINs employ injective aggregation and update functions, which make them capable of differentiating any pair of graphs that the 1-WL test distinguishes. Beyond this, models such as k-dimensional GNNs operate on higher-order graph representations akin to the k-WL tests, enabling these networks to capture more complex graph structures.
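
The sketch below illustrates the GIN update rule of Xu et al., h_v <- MLP((1 + eps) * h_v + sum over neighbors of h_u), in plain numpy. The weights are random placeholders rather than trained parameters, and the two-layer MLP is an assumed minimal stand-in.

```python
import numpy as np

def gin_layer(H, A, W1, W2, eps=0.0):
    """One GIN update: h_v <- MLP((1 + eps) * h_v + sum of neighbor h_u).

    H: (n, d) node features, A: (n, n) adjacency matrix.
    Sum aggregation is injective on multisets of node features,
    which is what lets GIN match the power of the 1-WL test.
    """
    msg = (1.0 + eps) * H + A @ H          # injective sum aggregation
    hidden = np.maximum(msg @ W1, 0.0)     # two-layer MLP with ReLU
    return hidden @ W2

rng = np.random.default_rng(0)
n, d = 6, 8
A = rng.integers(0, 2, size=(n, n))
A = np.triu(A, 1); A = A + A.T             # random undirected graph
H = np.ones((n, d))                        # uniform initial features
H = gin_layer(H, A, rng.normal(size=(d, d)), rng.normal(size=(d, d)))
graph_embedding = H.sum(axis=0)            # sum readout over all nodes
```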

Connections to Distributed Local Algorithms

The relationship between GNNs and distributed local algorithms provides another angle from which to assess their expressive power. GNNs can simulate distributed algorithms that solve localized combinatorial problems, since both compute by exchanging messages along the edges of the graph. This equivalence allows results from distributed algorithm theory to be transferred to GNNs, such as bounds on the approximation ratios achievable for the minimum vertex cover and minimum dominating set problems. Enhancements that incorporate port numbering and random features further improve the expressive power of GNNs and the approximation guarantees they can attain.
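
As an assumed illustration (not taken from the survey), here is a sequential sketch of the classical maximal-matching 2-approximation for minimum vertex cover, the style of constant-factor guarantee that distributed local algorithms, and hence sufficiently powerful GNNs, can attain.

```python
def vertex_cover_2_approx(edges):
    """Greedily build a maximal matching; both endpoints form a cover.

    Any optimal cover must contain at least one endpoint of each matched
    edge, so this cover is at most twice the optimum size.
    """
    cover, matched = set(), set()
    for u, v in edges:
        if u not in matched and v not in matched:
            matched.update((u, v))   # add edge to the matching
            cover.update((u, v))     # take both endpoints into the cover
    return cover

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]
print(vertex_cover_2_approx(edges))  # e.g. {0, 1, 2, 3}
```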

Practical and Theoretical Implications

The detailed study of GNNs' expressive limits has significant implications for their application to graph-related problems. On the practical side, employing more expressive variants such as k-GNNs, or models enhanced with random features, enables better problem-solving within a feasible computational budget. Theoretically, understanding these limitations guides future research on improving GNN architectures, potentially influencing developments in areas like algorithmic graph theory and logic-based graph processing models.
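
The random-feature enhancement mentioned above is simple to sketch: appending an i.i.d. random value to each node breaks the symmetry that makes regular graphs indistinguishable to message passing. The helper below is a minimal, assumed illustration of the idea (in the spirit of rGIN), not the paper's exact construction.

```python
import numpy as np

def add_random_features(H, rng, dim=1):
    """Concatenate i.i.d. random features so symmetric nodes differ."""
    R = rng.uniform(size=(H.shape[0], dim))
    return np.concatenate([H, R], axis=1)

rng = np.random.default_rng(42)
H = np.ones((6, 4))               # uniform features: all nodes look alike
H = add_random_features(H, rng)   # now each node is distinguishable
```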

Speculation on Future Developments

Looking ahead, the field anticipates continued improvements in GNN architectures, driven by deeper insights from graph theory, distributed computing, and complexity analyses. Innovations might include more efficient approximations of WL-test capabilities in GNNs, advanced treatments of node features to overcome current limitations, and explorations into the integration of GNNs within broader AI systems. These advancements will further the reach of GNNs across new application domains, solidifying their role in graph-structured data processing.