
Graph Neural Prompting with Large Language Models (2309.15427v2)

Published 27 Sep 2023 in cs.CL, cs.AI, and cs.LG

Abstract: LLMs have shown remarkable generalization capability with exceptional performance in various language modeling tasks. However, they still exhibit inherent limitations in precisely capturing and returning grounded knowledge. While existing work has explored utilizing knowledge graphs (KGs) to enhance language modeling via joint training and customized model architectures, applying this to LLMs is problematic owing to their large number of parameters and high computational cost. Therefore, how to enhance pre-trained LLMs using grounded knowledge, e.g., retrieval-augmented generation, remains an open question. In this work, we propose Graph Neural Prompting (GNP), a novel plug-and-play method to assist pre-trained LLMs in learning beneficial knowledge from KGs. GNP encompasses various designs, including a standard graph neural network encoder, a cross-modality pooling module, a domain projector, and a self-supervised link prediction objective. Extensive experiments on multiple datasets demonstrate the superiority of GNP on both commonsense and biomedical reasoning tasks across different LLM sizes and settings. Code is available at https://github.com/meettyj/GNP.
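The abstract names four components: a GNN encoder over a retrieved KG subgraph, a cross-modality pooling module, a domain projector, and a self-supervised link prediction objective. Below is a minimal sketch of how the first three could fit together around a frozen LLM. Every module name, dimension, and wiring choice here is an illustrative assumption, not the authors' reference implementation; consult the linked repository for the actual code.

```python
# Illustrative sketch of the GNP pipeline described in the abstract:
# GNN encoder -> cross-modality pooling -> domain projector -> soft prompt.
# All names and dimensions are assumptions made for this example.
import torch
import torch.nn as nn


class SimpleGNNLayer(nn.Module):
    """One round of mean-neighbor message passing (stand-in for the GNN encoder)."""
    def __init__(self, dim: int):
        super().__init__()
        self.linear = nn.Linear(dim, dim)

    def forward(self, node_feats, adj):
        # adj: (num_nodes, num_nodes) dense adjacency with self-loops.
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1.0)
        agg = adj @ node_feats / deg           # mean aggregation over neighbors
        return torch.relu(self.linear(agg))


class GraphNeuralPrompt(nn.Module):
    """Encodes a KG subgraph into a soft-prompt vector for a frozen LLM."""
    def __init__(self, node_dim: int, text_dim: int, llm_dim: int):
        super().__init__()
        self.gnn = SimpleGNNLayer(node_dim)
        # Cross-modality pooling: attend over node embeddings using the
        # question's text embedding as the query.
        self.attn = nn.MultiheadAttention(node_dim, num_heads=1, batch_first=True)
        self.to_query = nn.Linear(text_dim, node_dim)
        # Domain projector: map the pooled graph vector into the LLM's
        # embedding space so it can be prepended as a prompt token.
        self.projector = nn.Sequential(
            nn.Linear(node_dim, llm_dim), nn.GELU(), nn.Linear(llm_dim, llm_dim)
        )

    def forward(self, node_feats, adj, text_emb):
        # node_feats: (1, num_nodes, node_dim); adj: (num_nodes, num_nodes)
        # text_emb:   (1, text_dim) pooled embedding of the question text
        h = self.gnn(node_feats, adj)
        q = self.to_query(text_emb).unsqueeze(1)   # (1, 1, node_dim)
        pooled, _ = self.attn(q, h, h)             # (1, 1, node_dim)
        return self.projector(pooled)              # (1, 1, llm_dim) soft prompt


# Toy usage: a 5-node subgraph and an assumed LLM embedding size of 2048.
nodes = torch.randn(1, 5, 128)
adj = (torch.rand(5, 5) > 0.5).float() + torch.eye(5)
question = torch.randn(1, 768)
prompt = GraphNeuralPrompt(128, 768, 2048)(nodes, adj, question)
print(prompt.shape)  # torch.Size([1, 1, 2048])
```

The projected vector would be prepended to the LLM's input token embeddings as a soft prompt, so only the GNP modules are trained while the LLM stays frozen. The paper's self-supervised link prediction objective would act as an auxiliary loss on the GNN encoder and is omitted here for brevity.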

Authors (8)
  1. Yijun Tian
  2. Huan Song
  3. Zichen Wang
  4. Haozhu Wang
  5. Ziqing Hu
  6. Fang Wang
  7. Nitesh V. Chawla
  8. Panpan Xu
Citations (32)