
Weisfeiler-Lehman goes Dynamic: An Analysis of the Expressive Power of Graph Neural Networks for Attributed and Dynamic Graphs (2210.03990v2)

Published 8 Oct 2022 in cs.LG and cs.AI

Abstract: Graph Neural Networks (GNNs) are a large class of relational models for graph processing. Recent theoretical studies on the expressive power of GNNs have focused on two issues. On the one hand, it has been proven that GNNs are as powerful as the Weisfeiler-Lehman test (1-WL) in their ability to distinguish graphs. Moreover, it has been shown that the equivalence enforced by 1-WL equals unfolding equivalence. On the other hand, GNNs turned out to be universal approximators on graphs modulo the constraints enforced by 1-WL/unfolding equivalence. However, these results only apply to Static Attributed Undirected Homogeneous Graphs (SAUHGs) with node attributes. In contrast, real-life applications often involve a much larger variety of graph types. In this paper, we conduct a theoretical analysis of the expressive power of GNNs for two other graph domains that are particularly interesting in practical applications, namely dynamic graphs and SAUHGs with edge attributes. Dynamic graphs are widely used in modern applications; hence, the study of the expressive capability of GNNs in this domain is essential for practical reasons and, in addition, it requires a new analyzing approach due to the difference in the architecture of dynamic GNNs compared to static ones. On the other hand, the examination of SAUHGs is of particular relevance since they act as a standard form for all graph types: it has been shown that all graph types can be transformed without loss of information to SAUHGs with both attributes on nodes and edges. This paper considers generic GNN models and appropriate 1-WL tests for those domains. Then, the known results on the expressive power of GNNs are extended to the mentioned domains: it is proven that GNNs have the same capability as the 1-WL test, that 1-WL equivalence equals unfolding equivalence, and that GNNs are universal approximators modulo 1-WL/unfolding equivalence.
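For readers unfamiliar with the test the abstract refers to, the sketch below is a minimal, illustrative implementation of 1-WL color refinement on a static undirected graph with node attributes only. It does not cover the paper's extensions to edge-attributed SAUHGs or dynamic graphs; the function name, the example graphs, and the use of Python's hash() as the color-compression step are simplifying assumptions, not the authors' construction.

```python
from collections import Counter

def wl_1_colors(adj, node_attrs, num_iters=None):
    """1-dimensional Weisfeiler-Lehman (color refinement) on an undirected
    graph with node attributes.

    adj        : dict mapping each node to a list of neighbouring nodes
    node_attrs : dict mapping each node to a hashable attribute
    Returns the multiset of final node colors. Two graphs that receive
    different multisets are certifiably non-isomorphic; equal multisets
    mean the test (and hence a standard message-passing GNN) cannot
    tell the graphs apart.
    """
    if num_iters is None:
        num_iters = len(adj)  # refinement stabilises within |V| rounds
    # Initial colors are (hashes of) the node attributes.
    colors = {v: hash(node_attrs[v]) for v in adj}
    for _ in range(num_iters):
        # New color = compression of (own color, sorted multiset of
        # neighbour colors), approximated here with Python's hash().
        colors = {
            v: hash((colors[v], tuple(sorted(colors[u] for u in adj[v]))))
            for v in adj
        }
    return Counter(colors.values())

# A 6-cycle and two disjoint triangles with identical node attributes:
# both are 2-regular, so 1-WL cannot separate them.
cycle6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
two_triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1],
                 3: [4, 5], 4: [3, 5], 5: [3, 4]}
attrs = {i: "x" for i in range(6)}
print(wl_1_colors(cycle6, attrs) == wl_1_colors(two_triangles, attrs))  # True
```

The printed True illustrates the equivalence the paper builds on: graphs that 1-WL cannot distinguish are exactly those a GNN of this class cannot distinguish, which is why the expressive-power results are stated "modulo 1-WL/unfolding equivalence".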

Authors (7)
Citations (6)
