Self-Pro: A Self-Prompt and Tuning Framework for Graph Neural Networks (2310.10362v3)

Published 16 Oct 2023 in cs.LG and cs.AI

Abstract: Graphs have become an important modeling tool for web applications, and Graph Neural Networks (GNNs) have achieved great success in graph representation learning. However, the performance of traditional GNNs heavily relies on a large amount of supervision. Recently, "pre-train, fine-tune" has become the paradigm for addressing label dependency and poor generalization. However, pre-training strategies vary for graphs with homophily and heterophily, and the objectives of different downstream tasks also differ. This leaves a gap between pretexts and downstream tasks, resulting in "negative transfer" and poor performance. Inspired by prompt learning in NLP, many studies have turned to prompting to bridge this gap and fully leverage the pre-trained model. However, existing methods for graph prompting are tailored to homophily and neglect the inherent heterophily of graphs. Meanwhile, most of them rely on randomly initialized prompts, which harms stability. Therefore, we propose Self-Prompt, a prompting framework for graphs based on the model and data itself. We first introduce asymmetric graph contrastive learning as the pretext task to address heterophily and align the objectives of pretext and downstream tasks. We then reuse a component from the pre-training phase as the self-adapter and introduce self-prompts based on the graph itself for task adaptation. Finally, we conduct extensive experiments on 11 benchmark datasets to demonstrate its superiority. We provide our code at https://github.com/gongchenghua/Self-Pro.
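The abstract names the key ingredients, an asymmetric contrastive pretext and graph-derived self-prompts, without spelling them out. Below is a minimal, hypothetical sketch of one plausible reading of the asymmetric pretext: a propagation (GCN) branch contrasted against a propagation-free (MLP) branch, so the objective does not assume that neighboring nodes are similar and can therefore tolerate heterophily. All names here (`GCNLayer`, `AsymmetricContrast`, `normalize_adj`) and the exact loss are illustrative assumptions, not the paper's implementation; see the authors' repository for the actual objective.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def normalize_adj(adj):
    # Symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}
    adj = adj + torch.eye(adj.size(0))
    d_inv_sqrt = adj.sum(dim=1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * adj * d_inv_sqrt.unsqueeze(0)

class GCNLayer(nn.Module):
    """One graph convolution over a dense normalized adjacency."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj_norm):
        return adj_norm @ self.lin(x)

class AsymmetricContrast(nn.Module):
    """Hypothetical asymmetric pretext: a structural (GCN) view vs. an
    attribute-only (MLP) view, contrasted per node with an InfoNCE loss."""
    def __init__(self, in_dim, hid_dim, tau=0.5):
        super().__init__()
        self.gnn = GCNLayer(in_dim, hid_dim)
        self.mlp = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU(),
                                 nn.Linear(hid_dim, hid_dim))
        self.tau = tau

    def forward(self, x, adj_norm):
        z1 = F.normalize(self.gnn(x, adj_norm), dim=1)  # propagation view
        z2 = F.normalize(self.mlp(x), dim=1)            # propagation-free view
        logits = z1 @ z2.t() / self.tau                 # node-pair similarities
        labels = torch.arange(x.size(0))                # node i's positive is itself
        return 0.5 * (F.cross_entropy(logits, labels) +
                      F.cross_entropy(logits.t(), labels))

# Toy usage on a random symmetric graph.
n, d = 8, 16
x = torch.randn(n, d)
adj = (torch.rand(n, n) > 0.7).float()
adj = ((adj + adj.t()) > 0).float()
model = AsymmetricContrast(d, 32)
loss = model(x, normalize_adj(adj))
loss.backward()
```

The design point the sketch illustrates: because the MLP branch never mixes neighbor features, pulling the two views together per node does not smooth heterophilous nodes toward their neighborhood average.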

Authors (6)
  1. Chenghua Gong (7 papers)
  2. Xiang Li (1003 papers)
  3. Jianxiang Yu (16 papers)
  4. Cheng Yao (5 papers)
  5. Jiaqi Tan (11 papers)
  6. Chengcheng Yu (6 papers)
Citations (3)
