
LLM-Based Multi-Hop Question Answering with Knowledge Graph Integration in Evolving Environments (2408.15903v2)

Published 28 Aug 2024 in cs.CL

Abstract: The important challenge of keeping knowledge in LLMs up-to-date has led to the development of various methods for incorporating new facts. However, existing methods for such knowledge editing still face difficulties with multi-hop questions that require accurate fact identification and sequential logical reasoning, particularly among numerous fact updates. To tackle these challenges, this paper introduces Graph Memory-based Editing for LLMs (GMeLLo), a straightforward and effective method that merges the explicit knowledge representation of Knowledge Graphs (KGs) with the linguistic flexibility of LLMs. Beyond merely leveraging LLMs for question answering, GMeLLo employs these models to convert free-form language into structured queries and fact triples, facilitating seamless interaction with KGs for rapid updates and precise multi-hop reasoning. Our results show that GMeLLo significantly surpasses current state-of-the-art (SOTA) knowledge editing methods in the multi-hop question answering benchmark, MQuAKE, especially in scenarios with extensive knowledge edits.
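The abstract describes the pipeline only at a high level: edited facts are stored explicitly in a knowledge graph, the LLM translates free-form fact edits into triples and multi-hop questions into structured queries, and the answer is obtained by traversing the updated graph. The following sketch illustrates that idea under simplifying assumptions; the triple-store layout and the functions `extract_triple` and `question_to_relation_chain` are hypothetical stand-ins for the LLM components, not the authors' actual prompts, models, or code.

```python
# Minimal sketch of the idea in the abstract: keep edited facts in an explicit
# triple store (a tiny stand-in for a knowledge graph) and use an LLM only to
# translate free-form text into triples / relation chains.
from typing import Optional

# Knowledge graph as {(subject, relation): object}; an edit simply overwrites an entry.
kg: dict[tuple[str, str], str] = {}


def extract_triple(edit_sentence: str) -> tuple[str, str, str]:
    """Hypothetical LLM call: turn a free-form fact edit into a (subject, relation, object) triple."""
    # A real system would prompt an LLM here; this stub handles one hard-coded example.
    assert edit_sentence == "The president of the United States is Jane Doe."
    return ("United States", "president", "Jane Doe")


def question_to_relation_chain(question: str) -> tuple[str, list[str]]:
    """Hypothetical LLM call: turn a multi-hop question into a start entity and a relation chain."""
    assert question == "Who is the spouse of the president of the United States?"
    return ("United States", ["president", "spouse"])


def apply_edit(edit_sentence: str) -> None:
    s, r, o = extract_triple(edit_sentence)
    kg[(s, r)] = o  # rapid update: just overwrite the stored fact


def answer(question: str) -> Optional[str]:
    entity, relations = question_to_relation_chain(question)
    for rel in relations:  # sequential multi-hop traversal over the edited graph
        nxt = kg.get((entity, rel))
        if nxt is None:
            return None  # the full method could fall back to the base LLM here
        entity = nxt
    return entity


if __name__ == "__main__":
    kg[("Jane Doe", "spouse")] = "John Doe"                         # pre-existing fact
    apply_edit("The president of the United States is Jane Doe.")   # knowledge edit
    print(answer("Who is the spouse of the president of the United States?"))  # -> John Doe
```

The point of the sketch is the division of labor the abstract emphasizes: the graph holds the up-to-date facts and handles the hop-by-hop reasoning, while the LLM is used only for converting natural language into structured queries and triples.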

Authors (8)
  1. Ruirui Chen (12 papers)
  2. Weifeng Jiang (12 papers)
  3. Chengwei Qin (28 papers)
  4. Ishaan Singh Rawal (3 papers)
  5. Cheston Tan (50 papers)
  6. Dongkyu Choi (6 papers)
  7. Bo Xiong (84 papers)
  8. Bo Ai (231 papers)
Citations (1)
