Graph Prompt Learning: A Comprehensive Survey and Beyond (2311.16534v1)

Published 28 Nov 2023 in cs.AI

Abstract: Artificial General Intelligence (AGI) has revolutionized numerous fields, yet its integration with graph data, a cornerstone in our interconnected world, remains nascent. This paper presents a pioneering survey on the emerging domain of graph prompts in AGI, addressing key challenges and opportunities in harnessing graph data for AGI applications. Despite substantial advancements in AGI across natural language processing and computer vision, the application to graph data is relatively underexplored. This survey critically evaluates the current landscape of AGI in handling graph data, highlighting the distinct challenges in cross-modality, cross-domain, and cross-task applications specific to graphs. Our work is the first to propose a unified framework for understanding graph prompt learning, offering clarity on prompt tokens, token structures, and insertion patterns in the graph domain. We delve into the intrinsic properties of graph prompts, exploring their flexibility, expressiveness, and interplay with existing graph models. A comprehensive taxonomy categorizes over 100 works in this field, aligning them with pre-training tasks across node-level, edge-level, and graph-level objectives. Additionally, we present ProG, a Python library, and an accompanying website to support and advance research in graph prompting. The survey culminates in a discussion of current challenges and future directions, offering a roadmap for research in graph prompting within AGI. Through this comprehensive analysis, we aim to catalyze further exploration and practical applications of AGI in graph data, underlining its potential to reshape AGI fields and beyond. ProG and the website can be accessed at \url{https://github.com/sheldonresearch/ProG} and \url{https://github.com/WxxShirley/Awesome-Graph-Prompt}, respectively.

Authors (6)
  1. Xiangguo Sun (30 papers)
  2. Jiawen Zhang (92 papers)
  3. Xixi Wu (13 papers)
  4. Hong Cheng (74 papers)
  5. Yun Xiong (41 papers)
  6. Jia Li (380 papers)
Citations (39)

Summary

Graph Prompt Learning: A Comprehensive Survey and Beyond

This paper presents a thorough survey of the nascent domain of graph prompt learning within the context of AGI. While AGI has advanced significantly in areas such as NLP and computer vision (CV), its application to graph data remains underexplored and presents distinct challenges and opportunities. This survey sets out to address that gap, providing a structured evaluation of how AGI techniques can be adapted and applied to graph data, and it establishes a foundational framework for understanding graph prompt learning.

Unified Framework and Design

The authors propose a unified framework that dissects graph prompt learning into three core components: prompt tokens, token structures, and insertion patterns. This framework supports a systematic understanding of how different methods implement graph prompts. The analysis covers over 100 existing works, categorized according to node-level, edge-level, and graph-level objectives; this classification clarifies how graph prompts are aligned with various pre-training tasks.
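
To make these three components concrete, the following PyTorch sketch shows one way they could be realized on top of a frozen, pre-trained GNN. The class name, the all-to-all insertion pattern, and the thresholded token links are illustrative assumptions, not the design of any particular surveyed method or of the ProG library.

```python
import torch
import torch.nn as nn

class GraphPrompt(nn.Module):
    """Minimal sketch of the three framework components (names are illustrative)."""

    def __init__(self, num_tokens: int, feat_dim: int):
        super().__init__()
        # (1) Prompt tokens: learnable feature vectors, one per prompt node.
        self.tokens = nn.Parameter(torch.randn(num_tokens, feat_dim) * 0.01)
        # (2) Token structure: learnable scores for links among prompt tokens,
        #     thresholded at use time to decide which token-token edges exist.
        self.token_links = nn.Parameter(torch.zeros(num_tokens, num_tokens))

    def forward(self, x, edge_index):
        # (3) Insertion pattern: attach every prompt token to every original
        # node (a dense pattern chosen only for illustration), producing an
        # augmented graph that a frozen pre-trained GNN can then encode.
        n, k = x.size(0), self.tokens.size(0)
        x_aug = torch.cat([x, self.tokens], dim=0)  # prompt tokens become nodes n..n+k-1

        # Cross edges between original nodes and prompt tokens, in both directions.
        src = torch.arange(n, device=x.device).repeat_interleave(k)
        dst = torch.arange(n, n + k, device=x.device).repeat(n)
        cross = torch.stack([torch.cat([src, dst]), torch.cat([dst, src])])

        # Token-token edges wherever the learned link score is positive.
        links = (self.token_links > 0).nonzero().t() + n
        edge_aug = torch.cat([edge_index, cross, links], dim=1)
        return x_aug, edge_aug
```

A pre-trained encoder would then be applied to the augmented pair `(x_aug, edge_aug)`, with the downstream readout typically taken from the original nodes or from the prompt tokens themselves.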

Key Findings and Contributions

The paper makes several significant contributions:

  1. Taxonomy and Analysis: By presenting a novel taxonomy, the survey organizes and compares diverse methodologies, providing researchers with a comprehensive view of existing approaches and their applicability across different graph tasks.
  2. Nature of Graph Prompts: A detailed exploration of the relationships between graph prompts and existing models highlights the flexibility and expressiveness of prompts, offering insights into why they succeed in graph contexts where traditional fine-tuning may falter (a minimal prompt-tuning sketch contrasting the two appears after this list).
  3. Practical Tools: The authors introduce ProG—a Python library—and a dedicated website to facilitate research dissemination and encourage further exploration in graph prompt learning.
  4. Roadmap for Future Research: Current challenges and potential future directions in graph prompting research are outlined, offering guidance for systematic advancements in this interdisciplinary field.
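
To make the contrast with full fine-tuning concrete (point 2 above), the sketch below freezes a pre-trained graph encoder and optimizes only a small additive feature prompt plus a task head. The `pretrained_gnn` module, the mini-batch format, and the single-vector prompt are assumptions made for illustration (in the spirit of feature-level prompting), not ProG's API or the paper's prescribed method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def prompt_tune(pretrained_gnn: nn.Module, loader, feat_dim: int,
                num_classes: int = 2, epochs: int = 50):
    """Train only a prompt vector and a linear head; the backbone stays frozen."""
    for p in pretrained_gnn.parameters():
        p.requires_grad_(False)  # frozen backbone: no gradient updates

    prompt = nn.Parameter(torch.zeros(feat_dim))   # learnable additive prompt
    head = nn.Linear(feat_dim, num_classes)        # assumes encoder output dim == feat_dim
    opt = torch.optim.Adam([prompt, *head.parameters()], lr=1e-3)

    for _ in range(epochs):
        for x, edge_index, y in loader:              # assumed batch format
            z = pretrained_gnn(x + prompt, edge_index)  # prompt added to every node feature
            loss = F.cross_entropy(head(z), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return prompt, head
```

Because only the prompt and the head receive gradients, the number of tuned parameters is a small fraction of the backbone's, which is one practical reason prompting can remain attractive where full fine-tuning of a large pre-trained model is costly or unstable.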

Challenges and Future Directions

The survey identifies challenges such as scaling graph models and the lack of intuitive evaluation metrics for graph prompts. It also highlights the need for more generalized, transferable, and theoretically grounded prompt designs. Future work might focus on developing large-scale graph models akin to LLMs and on exploring the cross-domain adaptability of graph prompts.

Implications and Speculations

The implications of graph prompt learning are far-reaching. By redefining how downstream tasks are approached, prompts offer a scalable way to leverage pre-trained models across various graph domains and tasks. This could pave the way for more personalized and context-aware applications, particularly in areas like social networks, recommendation systems, and knowledge management.

The potential for AGI techniques to transform graph data analysis depends on overcoming these outlined challenges. As prompts become more integral to processing complex, interconnected data, they stand to reshape not only AGI fields but also broader data science landscapes. The survey effectively catalyzes future exploration and highlights practical applications, underscoring the transformative potential of integrating graph prompts with AGI.