
Subgraph Pooling: Tackling Negative Transfer on Graphs (2402.08907v2)

Published 14 Feb 2024 in cs.LG, cs.AI, and cs.SI

Abstract: Transfer learning aims to enhance performance on a target task by leveraging knowledge from related source tasks. However, when the source and target tasks are not closely aligned, transfer can reduce performance, a phenomenon known as negative transfer. Unlike in image or text data, we find that negative transfer commonly occurs in graph-structured data, even when source and target graphs are semantically similar. Specifically, we identify that structural differences significantly amplify the dissimilarities in node embeddings across graphs. To mitigate this, we present a new insight: for semantically similar graphs, although structural differences cause a significant distribution shift in node embeddings, their impact on subgraph embeddings can be marginal. Building on this insight, we introduce Subgraph Pooling (SP), which aggregates nodes sampled from a k-hop neighborhood, and Subgraph Pooling++ (SP++), which aggregates nodes sampled by a random walk, to mitigate the impact of graph structural differences on knowledge transfer. We theoretically analyze the role of SP in reducing graph discrepancy and conduct extensive experiments to evaluate its superiority under various settings. The proposed SP methods are effective yet elegant and can be applied easily on top of any backbone Graph Neural Network (GNN). Our code and data are available at: https://github.com/Zehong-Wang/Subgraph-Pooling.
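The core idea in the abstract — forming a subgraph embedding by pooling node embeddings over a k-hop neighborhood (SP) or over nodes visited by a random walk (SP++) — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the mean aggregator, and the dict-of-lists graph representation are assumptions for exposition; the actual method (see the linked repository) operates on learned GNN embeddings.

```python
import random
from collections import deque

def k_hop_neighborhood(adj, node, k):
    """Return the set of nodes within k hops of `node` via BFS.
    `adj` is an adjacency dict: node -> list of neighbors."""
    visited = {node}
    frontier = deque([(node, 0)])
    while frontier:
        u, depth = frontier.popleft()
        if depth == k:
            continue
        for v in adj[u]:
            if v not in visited:
                visited.add(v)
                frontier.append((v, depth + 1))
    return visited

def random_walk_nodes(adj, node, walk_len, num_walks, seed=0):
    """SP++-style sampling (illustrative): collect nodes visited by
    `num_walks` random walks of length `walk_len` starting at `node`."""
    rng = random.Random(seed)
    nodes = {node}
    for _ in range(num_walks):
        u = node
        for _ in range(walk_len):
            u = rng.choice(adj[u])
            nodes.add(u)
    return nodes

def subgraph_pool(adj, embeddings, node, k=2):
    """Mean-pool node embeddings over the k-hop subgraph around `node`.
    `embeddings` maps node -> list of floats (e.g. GNN outputs)."""
    nbrs = k_hop_neighborhood(adj, node, k)
    dim = len(next(iter(embeddings.values())))
    pooled = [0.0] * dim
    for v in nbrs:
        for i, x in enumerate(embeddings[v]):
            pooled[i] += x / len(nbrs)
    return pooled
```

Because the pooled representation averages over a neighborhood, local structural perturbations that shift individual node embeddings tend to wash out, which is the intuition the paper formalizes.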
