
Ground Every Sentence: Improving Retrieval-Augmented LLMs with Interleaved Reference-Claim Generation (2407.01796v1)

Published 1 Jul 2024 in cs.CL

Abstract: Retrieval-Augmented Generation (RAG) has been widely adopted to enhance LLMs on knowledge-intensive tasks. Recently, Attributed Text Generation (ATG), which provides citations to support a model's responses in RAG, has attracted growing attention as a way to enhance the credibility of LLM-generated content and facilitate verification. Prior methods mainly adopt coarse-grained attribution, linking to passage-level references or providing paragraph-level citations. However, these methods still fall short in verifiability and incur nontrivial time costs for fact checking. This paper proposes a fine-grained ATG method called ReClaim (Refer & Claim), which alternates the generation of references and answers step by step. Unlike traditional coarse-grained attribution, ReClaim allows the model to attach sentence-level fine-grained citations to each answer sentence in long-form question-answering tasks. Our experiments encompass various training and inference methods and multiple LLMs, verifying the effectiveness of our approach.
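The alternating reference-then-claim loop described in the abstract can be sketched as follows. This is a minimal illustration of the interleaving control flow only, not the paper's actual prompts or model API: `model` is a hypothetical callable standing in for an LLM, the prompt strings are invented, and an empty reference is assumed to signal completion.

```python
def reclaim_answer(question, passages, model, max_steps=10):
    """Interleaved reference-claim generation (ReClaim-style sketch).

    Alternates two generation steps until the model signals it is done:
      1. generate the next supporting reference sentence from the passages;
      2. generate the answer sentence (claim) grounded in that reference.
    `model(prompt)` is a stand-in for an LLM call (an assumption, not
    the paper's interface); it returns "" when the answer is complete.
    """
    context = "\n".join(passages)
    cited_answer = []
    for _ in range(max_steps):
        # Step 1: pick the next sentence-level reference.
        ref = model(
            f"Question: {question}\nContext: {context}\n"
            f"Answer so far: {' '.join(cited_answer)}\n"
            f"Next reference sentence:"
        )
        if not ref:  # no further reference -> answer is complete
            break
        # Step 2: generate the claim supported by that reference.
        claim = model(
            f"Question: {question}\nCited reference: {ref}\n"
            f"Next answer sentence grounded in the reference:"
        )
        # Each answer sentence carries its own fine-grained citation.
        cited_answer.append(f"[{ref}] {claim}")
    return " ".join(cited_answer)


class StubModel:
    """Deterministic stand-in that replays scripted segments in order,
    used here only to demonstrate the loop's structure."""

    def __init__(self, segments):
        self.segments = list(segments)

    def __call__(self, prompt):
        return self.segments.pop(0) if self.segments else ""
```

For example, with a stub scripted to emit one reference, one claim, and then an empty string, the loop produces a single citation-prefixed answer sentence:

```python
model = StubModel([
    "Paris is the capital of France.",   # reference
    "The capital is Paris.",             # claim
    "",                                  # done
])
reclaim_answer("What is the capital of France?",
               ["Paris is the capital of France."], model)
# → "[Paris is the capital of France.] The capital is Paris."
```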

Authors (8)
  1. Sirui Xia (4 papers)
  2. Xintao Wang (132 papers)
  3. Jiaqing Liang (62 papers)
  4. Yifei Zhang (167 papers)
  5. Weikang Zhou (10 papers)
  6. Jiaji Deng (2 papers)
  7. Fei Yu (76 papers)
  8. Yanghua Xiao (151 papers)
Citations (3)