
Entropy-Based Decoding for Retrieval-Augmented Large Language Models (2406.17519v1)

Published 25 Jun 2024 in cs.CL

Abstract: Augmenting LLMs with retrieved external knowledge has proven effective for improving the factual accuracy of generated responses. Despite their success, retrieval-augmented LLMs still face the distractibility issue, where the generated responses are negatively influenced by noise from both external and internal knowledge sources. In this paper, we introduce a novel, training-free decoding method guided by entropy considerations to mitigate this issue. Our approach utilizes entropy-based document-parallel ensemble decoding to prioritize low-entropy distributions from retrieved documents, thereby enhancing the extraction of relevant information from the context. Additionally, it incorporates a contrastive decoding mechanism that contrasts the obtained low-entropy ensemble distribution with the high-entropy distribution derived from the model's internal knowledge across layers, which ensures a greater emphasis on reliable external information. Extensive experiments on open-domain question answering datasets demonstrate the superiority of our method.
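The two mechanisms in the abstract can be sketched as follows. This is an illustrative reading, not the paper's exact formulation: the inverse-entropy weighting scheme, the `alpha` contrast strength, and all function names are assumptions made for the example.

```python
import math

def entropy(p):
    # Shannon entropy of a probability distribution (list of floats summing to 1)
    return -sum(x * math.log(x) for x in p if x > 0)

def entropy_ensemble(doc_dists):
    # Document-parallel ensemble: weight each document-conditioned
    # next-token distribution by exp(-entropy), so low-entropy (confident)
    # documents dominate the mixture, then renormalize.
    weights = [math.exp(-entropy(p)) for p in doc_dists]
    z = sum(weights)
    weights = [w / z for w in weights]
    vocab = len(doc_dists[0])
    mix = [sum(w * p[i] for w, p in zip(weights, doc_dists))
           for i in range(vocab)]
    s = sum(mix)
    return [m / s for m in mix]

def contrastive_decode(ensemble_dist, internal_dist, alpha=0.5, eps=1e-12):
    # Contrast the low-entropy ensemble with the model's internal
    # (high-entropy) distribution in log space, then softmax-renormalize;
    # tokens supported by external evidence but not internal priors are boosted.
    scores = [math.log(max(e, eps)) - alpha * math.log(max(q, eps))
              for e, q in zip(ensemble_dist, internal_dist)]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [x / z for x in exps]
```

For instance, a confident distribution `[0.9, 0.05, 0.05]` from one retrieved document pulls the ensemble toward its top token even when a second, diffuse document distribution disagrees; the contrastive step then sharpens that preference further when the internal distribution assigns the token low probability.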

Authors (6)
  1. Zexuan Qiu (8 papers)
  2. Zijing Ou (21 papers)
  3. Bin Wu (202 papers)
  4. Jingjing Li (98 papers)
  5. Aiwei Liu (42 papers)
  6. Irwin King (170 papers)
Citations (2)