Entropy-Based Decoding for Retrieval-Augmented Large Language Models (2406.17519v1)
Abstract: Augmenting LLMs with retrieved external knowledge has proven effective for improving the factual accuracy of generated responses. Despite their success, retrieval-augmented LLMs still face the distractibility issue, where the generated responses are negatively influenced by noise from both external and internal knowledge sources. In this paper, we introduce a novel, training-free decoding method guided by entropy considerations to mitigate this issue. Our approach utilizes entropy-based document-parallel ensemble decoding to prioritize low-entropy distributions from retrieved documents, thereby enhancing the extraction of relevant information from the context. Additionally, it incorporates a contrastive decoding mechanism that contrasts the obtained low-entropy ensemble distribution with the high-entropy distribution derived from the model's internal knowledge across layers, ensuring a greater emphasis on reliable external information. Extensive experiments on open-domain question answering datasets demonstrate the superiority of our method.
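The abstract describes two components: an entropy-weighted ensemble over per-document next-token distributions, and a contrastive step against a distribution from the model's internal knowledge. The following is a minimal numpy sketch of that general idea under stated assumptions; the weighting scheme (softmax over negative entropies), the choice of internal-layer distribution, the `alpha` coefficient, and all function names are placeholders for illustration, not the paper's exact formulation.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over a 1-D logit vector."""
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def entropy(p, eps=1e-12):
    """Shannon entropy of a probability distribution."""
    p = np.clip(p, eps, 1.0)
    return -np.sum(p * np.log(p))

def entropy_weighted_ensemble(doc_logits):
    """Combine per-document next-token distributions, giving more weight
    to low-entropy (more confident) documents.

    doc_logits: array of shape (num_docs, vocab_size), one row of
    next-token logits per retrieved document, each conditioned on the
    same query and generation prefix."""
    dists = np.stack([softmax(l) for l in doc_logits])
    ents = np.array([entropy(p) for p in dists])
    # Lower entropy -> higher weight. Softmax over negative entropies is
    # an assumed weighting scheme, not necessarily the paper's.
    weights = softmax(-ents)
    return weights @ dists  # ensemble distribution over the vocabulary

def contrastive_next_token(ensemble_dist, internal_logits, alpha=1.0, eps=1e-12):
    """Contrast the low-entropy ensemble distribution with a high-entropy
    internal distribution (e.g., from an intermediate layer), penalizing
    tokens the model would emit without reliable external evidence."""
    internal_dist = softmax(internal_logits)
    scores = np.log(ensemble_dist + eps) - alpha * np.log(internal_dist + eps)
    return int(np.argmax(scores))  # greedy choice of the next token id

# Toy usage: 3 retrieved documents, a vocabulary of 5 tokens.
rng = np.random.default_rng(0)
doc_logits = rng.normal(size=(3, 5))
internal_logits = rng.normal(size=5)
ens = entropy_weighted_ensemble(doc_logits)
print("next token id:", contrastive_next_token(ens, internal_logits))
```

In an actual decoding loop, this selection would run once per generated token, with `doc_logits` recomputed for each retrieved document at every step.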
- Zexuan Qiu
- Zijing Ou
- Bin Wu
- Jingjing Li
- Aiwei Liu
- Irwin King