A Survey of Few-Shot Learning on Graphs: from Meta-Learning to Pre-Training and Prompt Learning (2402.01440v4)

Published 2 Feb 2024 in cs.LG, cs.AI, and cs.SI

Abstract: Graph representation learning, a critical step in graph-centric tasks, has seen significant advancements. Earlier techniques often operate in an end-to-end setting and rely heavily on the availability of ample labeled data. This constraint has spurred the emergence of few-shot learning on graphs, where only a few labels are available for each task. Given the extensive literature in this field, this survey endeavors to synthesize recent developments, provide comparative insights, and identify future directions. We systematically categorize existing studies based on two major taxonomies: (1) Problem taxonomy, which explores different types of data scarcity problems and their applications, and (2) Technique taxonomy, which details key strategies for addressing these data-scarce few-shot problems. The techniques can be broadly categorized into meta-learning, pre-training, and hybrid approaches, with a finer-grained classification in each category to aid readers in their method selection process. Within each category, we analyze the relationships among these methods and compare their strengths and limitations. Finally, we outline prospective directions for few-shot learning on graphs to catalyze continued innovation in this field. The website for this survey can be accessed at https://github.com/smufang/fewshotgraph.
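
The technique taxonomy above spans meta-learning, pre-training, and hybrid (e.g., prompt-based) approaches. As a concrete illustration of the meta-learning family, the sketch below runs a single episodic few-shot node-classification step: a one-layer GCN encoder produces node embeddings, class prototypes are averaged from a small labeled support set, and query nodes are classified by distance to the nearest prototype, in the style of prototypical networks. This is a minimal sketch under assumed conditions; the toy graph, tensor shapes, and episode layout are illustrative assumptions, not code from the survey or any method it covers.

# Minimal, illustrative sketch (assumptions only; not the survey's code):
# one 2-way 1-shot episode of few-shot node classification using a GCN
# encoder and a prototypical-network head.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SimpleGCNLayer(nn.Module):
    """One GCN layer: H' = ReLU(A_hat @ H @ W), dense adjacency for brevity."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, a_hat):
        return F.relu(self.lin(a_hat @ x))


def normalize_adj(adj):
    """Symmetrically normalized adjacency with self-loops: D^-1/2 (A+I) D^-1/2."""
    a = adj + torch.eye(adj.size(0))
    d_inv_sqrt = a.sum(1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * a * d_inv_sqrt.unsqueeze(0)


def prototypical_loss(emb, support_idx, support_y, query_idx, query_y, n_way):
    """Average support embeddings per class into prototypes, then classify
    queries by negative squared Euclidean distance to each prototype."""
    protos = torch.stack([
        emb[support_idx[support_y == c]].mean(0) for c in range(n_way)
    ])                                              # (n_way, dim)
    dists = torch.cdist(emb[query_idx], protos) ** 2
    return F.cross_entropy(-dists, query_y)


# --- toy 2-way 1-shot episode on a random graph (all values assumed) ---
torch.manual_seed(0)
n, feat_dim, hid = 10, 8, 16
adj = (torch.rand(n, n) > 0.7).float()
adj = ((adj + adj.t()) > 0).float()                 # symmetrize
x = torch.randn(n, feat_dim)
a_hat = normalize_adj(adj)

encoder = SimpleGCNLayer(feat_dim, hid)
emb = encoder(x, a_hat)

support_idx = torch.tensor([0, 1])                  # one labeled node per class
support_y = torch.tensor([0, 1])
query_idx = torch.tensor([2, 3, 4])
query_y = torch.tensor([0, 1, 0])                   # assumed ground truth

loss = prototypical_loss(emb, support_idx, support_y, query_idx, query_y, n_way=2)
loss.backward()                                     # one meta-training step on this episode
print(f"episode loss: {loss.item():.4f}")

Distance-based classification is a common design choice in this setting because it needs no per-episode classifier head to be trained, which suits tasks where only one or two labels per class are available.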
