
Understanding Heterophily for Graph Neural Networks (2401.09125v2)

Published 17 Jan 2024 in cs.LG and stat.ML

Abstract: Graphs with heterophily, where nodes connect to dissimilar neighbors through various patterns, have been regarded as challenging scenarios for Graph Neural Networks (GNNs). In this paper, we present a theoretical understanding of how different heterophily patterns affect GNNs by incorporating graph convolution (GC) operations into fully connected networks via the proposed Heterophilous Stochastic Block Model (HSBM), a general random graph model that accommodates diverse heterophily patterns. First, we show that applying a GC operation yields separability gains determined by two factors: the Euclidean distance between the neighborhood distributions and $\sqrt{\mathbb{E}\left[\operatorname{deg}\right]}$, where $\mathbb{E}\left[\operatorname{deg}\right]$ is the average node degree. This reveals that the impact of heterophily on classification must be evaluated alongside the average node degree. Second, we show that topological noise is detrimental to separability, with an effect equivalent to reducing $\mathbb{E}\left[\operatorname{deg}\right]$. Finally, when multiple GC operations are applied, we show that the separability gains are determined by the normalized distance between the $l$-powered neighborhood distributions; the nodes therefore remain separable as $l$ goes to infinity in a wide range of regimes. Extensive experiments on both synthetic and real-world data verify our theory.
