
Generate-then-Ground in Retrieval-Augmented Generation for Multi-hop Question Answering (2406.14891v2)

Published 21 Jun 2024 in cs.CL and cs.IR

Abstract: Multi-Hop Question Answering (MHQA) tasks present a significant challenge for LLMs due to the intensive knowledge required. Current solutions, like Retrieval-Augmented Generation, typically retrieve potential documents from an external corpus from which to read an answer. However, the performance of this retrieve-then-read paradigm is constrained by the retriever and the inevitable noise in the retrieved documents. To mitigate these challenges, we introduce a novel generate-then-ground (GenGround) framework, synergizing the parametric knowledge of LLMs and external documents to solve a multi-hop question. GenGround empowers LLMs to alternate two phases until the final answer is derived: (1) formulate a simpler, single-hop question and directly generate the answer; (2) ground the question-answer pair in retrieved documents, amending any wrong predictions in the answer. We also propose an instructional grounding distillation method to generalize our method into smaller models. Extensive experiments conducted on four datasets illustrate the superiority of our method.
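The abstract describes an alternating two-phase loop. The sketch below illustrates how such a generate-then-ground loop could be wired up; it is not the authors' released implementation, and all names (gen_ground, generate_subquestion, answer_directly, ground, is_final, the llm and retriever interfaces) are hypothetical placeholders.

```python
def gen_ground(question: str, llm, retriever, max_hops: int = 4) -> str:
    """Alternate (1) deduce-and-answer and (2) ground-and-amend until the
    multi-hop question is resolved. Interfaces here are assumed, not the
    paper's actual API."""
    history = []  # resolved (sub-question, answer) pairs so far

    for _ in range(max_hops):
        # Phase 1: formulate a simpler, single-hop question and answer it
        # directly from the LLM's parametric knowledge.
        sub_q = llm.generate_subquestion(question, history)
        draft_answer = llm.answer_directly(sub_q)

        # Phase 2: ground the question-answer pair in retrieved documents,
        # letting the LLM amend any wrong prediction in the draft answer.
        docs = retriever.retrieve(sub_q, k=5)
        revised_answer = llm.ground(sub_q, draft_answer, docs)

        history.append((sub_q, revised_answer))
        if llm.is_final(question, history):
            break

    # The answer to the last grounded sub-question resolves the original question.
    return history[-1][1]
```

The key design point the abstract emphasizes is that generation precedes retrieval: the model commits to a draft answer first, and the retrieved evidence is used to verify and correct it rather than to produce it from scratch.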

Authors (6)
  1. Zhengliang Shi (15 papers)
  2. Weiwei Sun (93 papers)
  3. Shen Gao (49 papers)
  4. Pengjie Ren (95 papers)
  5. Zhumin Chen (78 papers)
  6. Zhaochun Ren (117 papers)
Citations (13)