Ground Every Sentence: Improving Retrieval-Augmented LLMs with Interleaved Reference-Claim Generation (2407.01796v1)
Abstract: Retrieval-Augmented Generation (RAG) has been widely adopted to enhance LLMs on knowledge-intensive tasks. Recently, Attributed Text Generation (ATG), which adds citations that support the model's responses in RAG, has attracted growing attention as a way to improve the credibility of LLM-generated content and facilitate verification. Prior methods mostly adopt coarse-grained attribution, linking answers to passage-level references or providing paragraph-level citations. These methods still fall short in verifiability and impose a nontrivial time cost on fact checking. This paper proposes a fine-grained ATG method called ReClaim (Refer & Claim), which alternates between generating references and answer sentences step by step. Unlike traditional coarse-grained attribution, ReClaim attaches a sentence-level citation to each answer sentence in long-form question-answering tasks. Our experiments cover various training and inference methods and multiple LLMs, verifying the effectiveness of our approach.
- Sirui Xia
- Xintao Wang
- Jiaqing Liang
- Yifei Zhang
- Weikang Zhou
- Jiaji Deng
- Fei Yu
- Yanghua Xiao
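The abstract's core idea, interleaving a quoted reference with each generated answer sentence, can be illustrated with a minimal sketch. This assumes a generic `llm(prompt) -> str` completion function; the `reclaim_answer` name, prompt wording, and `DONE` sentinel are illustrative placeholders, not the paper's exact prompt format.

```python
from typing import Callable, List, Tuple

def reclaim_answer(
    question: str,
    passages: List[str],
    llm: Callable[[str], str],  # hypothetical completion function
    max_steps: int = 8,
) -> List[Tuple[str, str]]:
    """Alternate between quoting a reference and writing a claim.

    Returns (reference, claim) pairs, so every answer sentence is
    grounded in an explicit sentence-level citation.
    """
    context = "\n".join(f"[{i}] {p}" for i, p in enumerate(passages))
    pairs: List[Tuple[str, str]] = []
    transcript = ""  # previously generated reference/claim pairs

    for _ in range(max_steps):
        # Step 1: ask the model to quote one supporting sentence verbatim,
        # or to signal that the answer is complete.
        ref = llm(
            f"Passages:\n{context}\n\nQuestion: {question}\n"
            f"{transcript}"
            "Quote ONE sentence from the passages that supports the next "
            "part of the answer, or output DONE if the answer is complete:\n"
        ).strip()
        if ref == "DONE":
            break

        # Step 2: ask the model to write one answer sentence that is
        # supported by the reference it just quoted.
        claim = llm(
            f"Passages:\n{context}\n\nQuestion: {question}\n"
            f"{transcript}Reference: {ref}\n"
            "Write ONE answer sentence supported by this reference:\n"
        ).strip()

        pairs.append((ref, claim))
        transcript += f"Reference: {ref}\nClaim: {claim}\n"

    return pairs
```

Because each claim is paired with the exact sentence it cites, a reader can verify the answer one sentence at a time instead of re-reading whole passages, which is the verifiability gain the abstract attributes to fine-grained attribution.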