Incorporating Heterophily into Graph Neural Networks for Graph Classification (2203.07678v2)

Published 15 Mar 2022 in cs.LG and cs.SI

Abstract: Graph Neural Networks (GNNs) often assume strong homophily for graph classification, seldom considering heterophily, which means connected nodes tend to have different class labels and dissimilar features. In real-world scenarios, graphs may have nodes that exhibit both homophily and heterophily. Failing to generalize to this setting makes many GNNs underperform in graph classification. In this paper, we address this limitation by identifying three effective designs and developing a novel GNN architecture called IHGNN (short for Incorporating Heterophily into Graph Neural Networks). These designs include the combination of integration and separation of the ego- and neighbor-embeddings of nodes, adaptive aggregation of node embeddings from different layers, and differentiation between different node embeddings for constructing the graph-level readout function. We empirically validate IHGNN on various graph datasets and demonstrate that it outperforms the state-of-the-art GNNs for graph classification.
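
The three designs summarized in the abstract can be illustrated with a compact sketch. The code below is not the authors' implementation; it is a hypothetical PyTorch rendering of those ideas under stated assumptions: a dense adjacency matrix, sum aggregation of neighbors, learnable softmax weights over layer outputs for the adaptive aggregation, and concatenation of per-layer pooled embeddings as the "differentiated" readout. All names (IHGNNLayerSketch, IHGNNSketch, hidden_dim, num_layers, etc.) are invented for illustration and do not come from the paper.

```python
# Minimal sketch of the three IHGNN designs described in the abstract.
# NOT the authors' code: aggregation scheme, layer weighting, and readout
# are illustrative assumptions.
import torch
import torch.nn as nn


class IHGNNLayerSketch(nn.Module):
    """One message-passing layer that both separates and integrates the
    ego-embedding and the aggregated neighbor-embedding (design 1)."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.ego = nn.Linear(in_dim, out_dim)      # transform of the node itself
        self.neigh = nn.Linear(in_dim, out_dim)    # transform of aggregated neighbors
        self.combine = nn.Linear(in_dim, out_dim)  # transform of ego + neighbors (integration)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: [num_nodes, in_dim], adj: [num_nodes, num_nodes] without self-loops
        neigh_sum = adj @ x                          # sum-aggregate neighbor features
        separated = self.ego(x) + self.neigh(neigh_sum)   # keep ego/neighbor roles apart
        integrated = self.combine(x + neigh_sum)           # mix them before transforming
        return torch.relu(separated + integrated)


class IHGNNSketch(nn.Module):
    """Stacks layers, adaptively weights per-layer node embeddings (design 2),
    and keeps per-layer pooled embeddings differentiated in the readout
    (design 3) by concatenating rather than summing them."""

    def __init__(self, in_dim: int, hidden_dim: int, num_layers: int, num_classes: int):
        super().__init__()
        dims = [in_dim] + [hidden_dim] * num_layers
        self.layers = nn.ModuleList(
            [IHGNNLayerSketch(dims[i], dims[i + 1]) for i in range(num_layers)]
        )
        # one learnable scalar per layer: adaptive combination of layer outputs
        self.layer_weights = nn.Parameter(torch.ones(num_layers))
        self.classifier = nn.Linear(num_layers * hidden_dim, num_classes)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        per_layer = []
        h = x
        for layer in self.layers:
            h = layer(h, adj)
            per_layer.append(h)
        weights = torch.softmax(self.layer_weights, dim=0)
        # adaptive aggregation: scale each layer's node embeddings, then sum-pool nodes
        pooled = [w * h.sum(dim=0) for w, h in zip(weights, per_layer)]
        # differentiated readout: concatenate pooled per-layer embeddings
        graph_embedding = torch.cat(pooled, dim=-1)
        return self.classifier(graph_embedding)


if __name__ == "__main__":
    # toy graph: 4 nodes on a ring, 8-dim features, 2 graph classes
    adj = torch.tensor([[0, 1, 0, 1],
                        [1, 0, 1, 0],
                        [0, 1, 0, 1],
                        [1, 0, 1, 0]], dtype=torch.float)
    x = torch.randn(4, 8)
    model = IHGNNSketch(in_dim=8, hidden_dim=16, num_layers=3, num_classes=2)
    print(model(x, adj).shape)  # torch.Size([2])
```

Concatenating the per-layer pooled embeddings, rather than summing them, is one simple way to keep embeddings from different depths distinguishable in the readout; the paper's actual readout and aggregation functions may differ.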

Authors (3)
  1. Jiayi Yang (7 papers)
  2. Sourav Medya (36 papers)
  3. Wei Ye (110 papers)
Citations (3)