GraphLLM: Boosting Graph Reasoning Ability of Large Language Model (2310.05845v1)

Published 9 Oct 2023 in cs.CL and cs.AI

Abstract: The advancement of LLMs has remarkably pushed the boundaries towards artificial general intelligence (AGI), with their exceptional ability on understanding diverse types of information, including but not limited to images and audio. Despite this progress, a critical gap remains in empowering LLMs to proficiently understand and reason on graph data. Recent studies underscore LLMs' underwhelming performance on fundamental graph reasoning tasks. In this paper, we endeavor to unearth the obstacles that impede LLMs in graph reasoning, pinpointing the common practice of converting graphs into natural language descriptions (Graph2Text) as a fundamental bottleneck. To overcome this impediment, we introduce GraphLLM, a pioneering end-to-end approach that synergistically integrates graph learning models with LLMs. This synergy equips LLMs with the ability to proficiently interpret and reason on graph data, harnessing the superior expressive power of graph learning models. Our empirical evaluations across four fundamental graph reasoning tasks validate the effectiveness of GraphLLM. The results exhibit a substantial average accuracy enhancement of 54.44%, alongside a noteworthy context reduction of 96.45% across various graph reasoning tasks.
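The Graph2Text practice the abstract identifies as the bottleneck can be sketched as follows. This is an illustrative assumption, not the paper's exact serialization: it shows how even a tiny graph inflates into verbose natural-language context, which motivates the 96.45% context reduction GraphLLM reports.

```python
# Hypothetical Graph2Text baseline: serialize a graph edge-by-edge
# into natural language before prompting an LLM. The function name
# and phrasing are illustrative, not taken from the paper.

def graph2text(edges):
    """Describe an undirected graph as a natural-language string."""
    lines = [f"Node {u} is connected to node {v}." for u, v in edges]
    return " ".join(lines)

# A 4-node cycle graph: 4 edges become a long textual prompt.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
prompt = graph2text(edges)
print(prompt)
```

GraphLLM instead feeds the graph through a graph learning model whose output conditions the LLM directly, avoiding this textual expansion.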

Authors (7)
  1. Ziwei Chai (8 papers)
  2. Tianjie Zhang (10 papers)
  3. Liang Wu (138 papers)
  4. Kaiqiao Han (8 papers)
  5. Xiaohai Hu (4 papers)
  6. Xuanwen Huang (11 papers)
  7. Yang Yang (883 papers)
Citations (44)