
GraphEdit: Large Language Models for Graph Structure Learning (2402.15183v4)

Published 23 Feb 2024 in cs.LG and cs.AI

Abstract: Graph Structure Learning (GSL) focuses on capturing intrinsic dependencies and interactions among nodes in graph-structured data by generating novel graph structures. Graph Neural Networks (GNNs) have emerged as promising GSL solutions, utilizing recursive message passing to encode node-wise inter-dependencies. However, many existing GSL methods heavily depend on explicit graph structural information as supervision signals, leaving them susceptible to challenges such as data noise and sparsity. In this work, we propose GraphEdit, an approach that leverages LLMs to learn complex node relationships in graph-structured data. By enhancing the reasoning capabilities of LLMs through instruction-tuning over graph structures, we aim to overcome the limitations associated with explicit graph structural information and enhance the reliability of graph structure learning. Our approach not only effectively denoises noisy connections but also identifies node-wise dependencies from a global perspective, providing a comprehensive understanding of the graph structure. We conduct extensive experiments on multiple benchmark datasets to demonstrate the effectiveness and robustness of GraphEdit across various settings. We have made our model implementation available at: https://github.com/HKUDS/GraphEdit.
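
The abstract outlines the core mechanism: an instruction-tuned LLM judges node-wise relationships and edits the graph by keeping or discarding candidate edges. Below is a minimal illustrative sketch of that candidate-edge filtering step in Python; the prompt wording, the `query_llm` callable, and the toy stand-in model are assumptions made for illustration only and are not taken from the released GraphEdit code.

```python
# Minimal sketch of LLM-guided graph structure editing (hypothetical interface,
# not the authors' released implementation). The idea: ask an instruction-tuned
# LLM whether a candidate edge between two text-attributed nodes should exist,
# and keep only the edges it judges plausible.

from typing import Callable


def build_edge_prompt(text_u: str, text_v: str) -> str:
    """Format a yes/no instruction asking whether two nodes should be linked."""
    return (
        "You are given descriptions of two nodes.\n"
        f"Node A: {text_u}\n"
        f"Node B: {text_v}\n"
        "Should these two nodes be connected? Answer 'yes' or 'no'."
    )


def edit_graph(
    node_texts: dict[int, str],
    candidate_edges: list[tuple[int, int]],
    query_llm: Callable[[str], str],  # any instruction-tuned LLM (assumption)
) -> list[tuple[int, int]]:
    """Keep only the candidate edges the LLM judges plausible."""
    kept = []
    for u, v in candidate_edges:
        prompt = build_edge_prompt(node_texts[u], node_texts[v])
        answer = query_llm(prompt).strip().lower()
        if answer.startswith("yes"):
            kept.append((u, v))
    return kept


if __name__ == "__main__":
    # Toy stand-in for a real instruction-tuned model (assumption): says "yes"
    # only when both node descriptions mention "graph".
    def toy_llm(prompt: str) -> str:
        return "yes" if prompt.count("graph") >= 2 else "no"

    texts = {
        0: "A paper on graph neural networks.",
        1: "A paper on protein folding.",
        2: "A survey of graph structure learning.",
    }
    print(edit_graph(texts, [(0, 1), (0, 2)], toy_llm))  # -> [(0, 2)]
```

In the actual method, such verdicts would come from an LLM instruction-tuned on graph structures rather than a heuristic, which is what lets the model denoise spurious links and propose dependencies beyond the observed edges.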

Authors (9)
  1. Zirui Guo (6 papers)
  2. Lianghao Xia (65 papers)
  3. Yanhua Yu (15 papers)
  4. Yuling Wang (12 papers)
  5. Zixuan Yang (16 papers)
  6. Wei Wei (424 papers)
  7. Liang Pang (94 papers)
  8. Tat-Seng Chua (359 papers)
  9. Chao Huang (244 papers)