
Which Modality should I use -- Text, Motif, or Image?: Understanding Graphs with Large Language Models (2311.09862v2)

Published 16 Nov 2023 in cs.CL and cs.SI

Abstract: Our research integrates graph data with LLMs, which, despite their advances across many fields driven by large text corpora, face limitations in encoding entire graphs because of context-size constraints. This paper introduces a new approach that encodes a graph in diverse modalities, such as text, image, and motif, coupled with prompts that approximate the graph's global connectivity, thereby improving LLMs' efficiency in processing complex graph structures. The study also presents GraphTMI, a novel benchmark for evaluating LLMs on graph structure analysis, focusing on homophily, motif presence, and graph difficulty. Key findings indicate that the image modality, especially with vision-language models such as GPT-4V, balances token limits and information preservation better than text and outperforms prior graph neural network (GNN) encoders. The research further assesses how various factors affect the performance of each encoding modality and outlines open challenges and potential future directions for LLMs in graph understanding and reasoning tasks. All data will be publicly available upon acceptance.
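
The abstract compares three ways of presenting a graph to an LLM. As a concrete illustration, here is a minimal sketch, not the authors' released code, of what the three encoders might look like; it assumes NetworkX and Matplotlib are available, and all function names are illustrative.

```python
# Minimal sketch of the three graph-encoding modalities the paper compares:
# text, motif, and image. Illustrative only; not the authors' implementation.
import networkx as nx
import matplotlib.pyplot as plt


def encode_as_text(G: nx.Graph) -> str:
    """Text modality: serialize the node and edge lists so an LLM
    can read the full structure as tokens (the most token-hungry option)."""
    edges = ", ".join(f"({u}, {v})" for u, v in G.edges())
    return f"The graph has nodes {sorted(G.nodes())} and edges {edges}."


def encode_as_motif(G: nx.Graph) -> str:
    """Motif modality: summarize local structure (here, per-node triangle
    counts and degrees) instead of listing every edge, trading detail
    for a much smaller token footprint."""
    triangles = nx.triangles(G)          # node -> number of triangles through it
    degrees = dict(G.degree())           # node -> degree (size of its star motif)
    return (f"Triangle counts per node: {triangles}. "
            f"Node degrees: {degrees}.")


def encode_as_image(G: nx.Graph, path: str = "graph.png") -> str:
    """Image modality: render a layout to a file that a vision-language
    model such as GPT-4V can take as input alongside a text prompt."""
    nx.draw(G, with_labels=True, node_color="lightblue")
    plt.savefig(path)
    plt.close()
    return path


if __name__ == "__main__":
    G = nx.karate_club_graph()  # a standard small benchmark graph
    print(encode_as_text(G)[:120], "...")
    print(encode_as_motif(G)[:120], "...")
    print("Image written to", encode_as_image(G))
```

The motif encoding trades edge-level fidelity for far fewer tokens, while the image encoding sidesteps token limits entirely by relying on a vision encoder; that trade-off between token budget and information preservation is what the paper's benchmark is designed to measure.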

