Can we Soft Prompt LLMs for Graph Learning Tasks? (2402.10359v2)
Abstract: Graphs play an important role in representing complex relationships in real-world applications such as social networks, biological data, and citation networks. In recent years, LLMs have achieved tremendous success across a variety of domains, which makes applying them to graphs particularly appealing. However, directly applying LLMs to graph data presents unique challenges due to the mismatch between the graph and text modalities. Hence, to further investigate LLMs' potential for comprehending graph information, we introduce GraphPrompter, a novel framework designed to align graph information with LLMs via soft prompts. Specifically, GraphPrompter consists of two main components: a graph neural network that encodes complex graph information and an LLM that effectively processes textual information. Comprehensive experiments on various benchmark datasets for node classification and link prediction demonstrate the effectiveness of the proposed method. The GraphPrompter framework unveils the substantial capabilities of LLMs as predictors in graph-related tasks, enabling researchers to utilize LLMs across a spectrum of real-world graph scenarios more effectively.
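The abstract describes the core mechanism: a GNN encodes a node's neighborhood, and the resulting embedding is injected into the LLM as soft-prompt tokens. Below is a minimal sketch of how such an alignment might look, assuming a GAT encoder, a frozen GPT-2 backbone, a single linear projector, and a prompt length of 4; these specific choices (model names, hidden sizes, prompt length) are illustrative assumptions, not the paper's exact configuration.

```python
# Illustrative sketch of the GraphPrompter idea (not the authors' code):
# a GNN encodes the target node, a linear projector maps that embedding
# into the LLM's token-embedding space, and the result is prepended to
# the embedded text prompt as soft-prompt tokens.
import torch
import torch.nn as nn
from torch_geometric.nn import GATConv
from transformers import AutoModelForCausalLM, AutoTokenizer


class GraphPrompterSketch(nn.Module):
    def __init__(self, in_dim: int, hid_dim: int,
                 llm_name: str = "gpt2",        # assumed backbone
                 num_prompt_tokens: int = 4):   # assumed prompt length
        super().__init__()
        # Freeze the LLM: only the GNN and projector are trained.
        self.llm = AutoModelForCausalLM.from_pretrained(llm_name)
        for p in self.llm.parameters():
            p.requires_grad = False
        self.tokenizer = AutoTokenizer.from_pretrained(llm_name)

        llm_dim = self.llm.get_input_embeddings().embedding_dim
        self.gnn1 = GATConv(in_dim, hid_dim, heads=4, concat=False)
        self.gnn2 = GATConv(hid_dim, hid_dim, heads=4, concat=False)
        # Project one graph embedding into several soft-prompt slots.
        self.projector = nn.Linear(hid_dim, num_prompt_tokens * llm_dim)
        self.num_prompt_tokens = num_prompt_tokens
        self.llm_dim = llm_dim

    def forward(self, x, edge_index, node_idx: int, text_prompt: str):
        # 1) Encode the graph and pick the target node's embedding.
        h = torch.relu(self.gnn1(x, edge_index))
        h = self.gnn2(h, edge_index)
        node_emb = h[node_idx]                       # (hid_dim,)

        # 2) Map it into the LLM embedding space as soft-prompt tokens.
        soft = self.projector(node_emb)
        soft = soft.view(1, self.num_prompt_tokens, self.llm_dim)

        # 3) Prepend the soft tokens to the embedded text prompt.
        ids = self.tokenizer(text_prompt, return_tensors="pt").input_ids
        text_emb = self.llm.get_input_embeddings()(ids)
        inputs = torch.cat([soft, text_emb], dim=1)
        return self.llm(inputs_embeds=inputs).logits
```

In this setup only the GNN and projector receive gradients (for example, against the tokenized class label as next-token targets for node classification), while the frozen LLM consumes the projected graph embedding as extra "tokens" alongside the textual prompt — the essence of soft-prompting graph information into the language modality.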
Authors: Zheyuan Liu, Xiaoxin He, Yijun Tian, Nitesh V. Chawla