Can LLM Graph Reasoning Generalize beyond Pattern Memorization? (2406.15992v2)

Published 23 Jun 2024 in cs.CL

Abstract: LLMs demonstrate great potential for problems with implicit graphical structures, and recent works seek to enhance the graph reasoning capabilities of LLMs through specialized instruction tuning. The resulting 'graph LLMs' are evaluated in in-distribution settings only, so it remains underexplored whether LLMs are learning generalizable graph reasoning skills or merely memorizing patterns in the synthetic training data. To this end, we propose the NLGift benchmark, an evaluation suite for LLM graph reasoning generalization: whether LLMs could go beyond semantic, numeric, structural, and reasoning patterns in the synthetic training data and improve utility on real-world graph-based tasks. Extensive experiments with two LLMs across four graph reasoning tasks demonstrate that while generalization on simple patterns (semantic, numeric) is somewhat satisfactory, LLMs struggle to generalize across reasoning and real-world patterns, casting doubt on the benefit of synthetic graph tuning for real-world tasks with underlying network structures. We explore three strategies to improve LLM graph reasoning generalization and find that while post-training alignment is most promising for real-world tasks, empowering LLM graph reasoning to go beyond pattern memorization remains an open research question.

Authors (7)
  1. Yizhuo Zhang (7 papers)
  2. Heng Wang (136 papers)
  3. Shangbin Feng (53 papers)
  4. Zhaoxuan Tan (35 papers)
  5. Xiaochuang Han (23 papers)
  6. Tianxing He (36 papers)
  7. Yulia Tsvetkov (142 papers)
Citations (9)