On the Role of Long-tail Knowledge in Retrieval Augmented Large Language Models (2406.16367v1)

Published 24 Jun 2024 in cs.IR

Abstract: Retrieval-augmented generation (RAG) excels at enhancing the knowledge capabilities of LLMs by supplying documents retrieved for user queries. However, existing RAG pipelines focus on improving response quality by augmenting queries indiscriminately with retrieved information, paying little attention to what kind of knowledge an LLM actually needs to answer the original query more accurately. In this paper, we argue that long-tail knowledge is crucial for RAG, since LLMs have already memorized common world knowledge during large-scale pre-training. Based on this observation, we propose a simple yet effective long-tail knowledge detection method for LLMs. Specifically, we derive a novel Generative Expected Calibration Error (GECE) metric that measures the "long-tailness" of knowledge using both statistics and semantics. We then retrieve relevant documents and infuse them into the model to patch knowledge gaps only when the input query involves long-tail knowledge. Experiments show that, compared to existing RAG pipelines, our method achieves over a 4x speedup in average inference time along with consistent performance improvements on downstream tasks.
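The core idea, selectively triggering retrieval only for long-tail queries, can be illustrated with a minimal sketch. The abstract does not give the GECE formula, so the `gece` function below is an illustrative stand-in that combines a statistical signal (mean token log-probability under the LLM) with a semantic signal (embedding distance from a corpus centroid); all names and APIs here are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def gece(token_logprobs: list[float], query_emb: np.ndarray,
         centroid: np.ndarray) -> float:
    """Hypothetical 'long-tailness' score: higher means the query looks
    rarer to the model. Stands in for the paper's GECE metric."""
    stat = -float(np.mean(token_logprobs))             # model uncertainty
    sem = float(np.linalg.norm(query_emb - centroid))  # semantic rarity
    return stat * sem

def answer(query, llm, retriever, embed, centroid, tau=1.0):
    """Gated RAG: retrieve documents only when the query scores as long-tail.
    `llm`, `retriever`, and `embed` are assumed interfaces."""
    logprobs = llm.score(query)        # per-token log-probs of the query
    q_emb = embed(query)
    if gece(logprobs, q_emb, centroid) > tau:
        docs = retriever(query)        # long-tail: patch knowledge gaps
        return llm.generate(query, context=docs)
    return llm.generate(query)         # common knowledge: skip retrieval
```

Skipping retrieval for queries the model already answers confidently is what yields the reported inference-time speedup, since the retriever and the longer augmented prompt are invoked only on the long-tail fraction of traffic.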

Authors (8)
  1. Dongyang Li (41 papers)
  2. Junbing Yan (10 papers)
  3. Taolin Zhang (34 papers)
  4. Chengyu Wang (93 papers)
  5. Xiaofeng He (33 papers)
  6. Longtao Huang (27 papers)
  7. Hui Xue (109 papers)
  8. Jun Huang (126 papers)
Citations (2)