Learning Fine-Grained Grounded Citations for Attributed Large Language Models (2408.04568v1)
Abstract: Despite their impressive performance on information-seeking tasks, LLMs still struggle with hallucinations. Attributed LLMs, which augment generated text with in-line citations, have shown potential in mitigating hallucinations and improving verifiability. However, current approaches suffer from suboptimal citation quality due to their reliance on in-context learning. Furthermore, the practice of citing only coarse document identifiers makes it difficult for users to perform fine-grained verification. In this work, we introduce FRONT, a training framework designed to teach LLMs to generate Fine-Grained Grounded Citations. FRONT first grounds model outputs in fine-grained supporting quotes; these quotes then guide the generation of grounded and consistent responses, not only improving citation quality but also facilitating fine-grained verification. Experiments on the ALCE benchmark demonstrate the efficacy of FRONT in generating superior grounded responses and highly supportive citations. With LLaMA-2-7B, the framework significantly outperforms all baselines, achieving an average improvement of 14.21% in citation quality across all datasets, even surpassing ChatGPT.
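The two-stage idea in the abstract (first select fine-grained supporting quotes, then let those quotes guide a cited response) can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's method: the actual framework trains an LLM for both stages, whereas here quote selection is a toy word-overlap scorer and the `select_quotes` / `generate_with_citations` names are hypothetical.

```python
# Hedged sketch of a FRONT-style two-stage attributed-generation pipeline.
# Assumption: the real system uses a trained LLM for both stages; this toy
# version scores quotes by question-word overlap purely for illustration.

def select_quotes(question, documents, top_k=2):
    """Stage 1: ground the answer in fine-grained supporting quotes.
    Toy scorer: count overlapping question words per sentence."""
    q_words = set(question.lower().split())
    scored = []
    for doc_id, text in enumerate(documents, start=1):
        for sentence in text.split(". "):
            overlap = len(q_words & set(sentence.lower().split()))
            scored.append((overlap, doc_id, sentence.strip(". ")))
    scored.sort(key=lambda t: t[0], reverse=True)
    return [(doc_id, s) for _, doc_id, s in scored[:top_k]]

def generate_with_citations(quotes):
    """Stage 2: condition the response on the selected quotes so each
    statement carries an in-line citation to its source document."""
    return " ".join(f"{s} [{doc_id}]." for doc_id, s in quotes)

docs = [
    "The Eiffel Tower is in Paris. It was completed in 1889",
    "Paris is the capital of France. It hosts the Louvre",
]
quotes = select_quotes("When was the Eiffel Tower completed", docs)
print(generate_with_citations(quotes))
# Each emitted sentence is a verbatim quote tagged with its document id,
# so a reader can verify every claim at the quote level, not just per document.
```

The key design point mirrored here is that generation consumes the quotes rather than the full documents, which is what makes the citations fine-grained and checkable.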
- Lei Huang
- Xiaocheng Feng
- Weitao Ma
- Yuxuan Gu
- Weihong Zhong
- Xiachong Feng
- Weijiang Yu
- Weihua Peng
- Duyu Tang
- Dandan Tu
- Bing Qin