
Improving Retrieval Augmented Open-Domain Question-Answering with Vectorized Contexts (2404.02022v2)

Published 2 Apr 2024 in cs.CL

Abstract: In the era of LLMs, applying techniques such as Retrieval Augmented Generation can better address Open-Domain Question-Answering problems. Due to constraints such as model size and computing resources, the context length is often limited, and it is challenging to enable the model to cover overlong contexts while answering open-domain questions. This paper proposes a general and convenient method for covering longer contexts in Open-Domain Question-Answering tasks. It leverages a small encoder LLM that effectively encodes contexts, and the encoding is fused with the original inputs via cross-attention. With our method, the original LLMs can cover several times longer contexts while keeping computing requirements close to the baseline. Our experiments demonstrate that after fine-tuning, performance improves across two held-in datasets, four held-out datasets, and two In-Context Learning settings.
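The abstract's core idea (a small encoder compresses an overlong context into vectors, which the main model then reads via cross-attention over its original inputs) can be illustrated with a minimal toy sketch. Everything here is an assumption for illustration: the "encoder" is a simple mean-pooling over chunks, projection matrices are omitted, and all dimensions are arbitrary; the paper's actual architecture is not reproduced.

```python
import math
import random

random.seed(0)
d = 8            # hidden size (illustrative)
n_query = 3      # tokens of the original input
n_ctx = 4        # compressed context vectors from the small encoder

def rand_vec(n):
    return [random.gauss(0.0, 1.0) for _ in range(n)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def cross_attention(hidden, ctx_vecs):
    """Each hidden state of the original input attends over the
    encoded context vectors (W_q/W_k/W_v projections omitted)."""
    fused = []
    for h in hidden:
        scores = [dot(h, c) / math.sqrt(d) for c in ctx_vecs]
        weights = softmax(scores)
        fused.append([sum(w * c[i] for w, c in zip(weights, ctx_vecs))
                      for i in range(d)])
    return fused

# Stand-in "small encoder": mean-pool chunks of a long context
# (32 token vectors) down to n_ctx compressed vectors.
long_context = [rand_vec(d) for _ in range(32)]
chunk = len(long_context) // n_ctx
ctx_vecs = [[sum(tok[i] for tok in long_context[j*chunk:(j+1)*chunk]) / chunk
             for i in range(d)]
            for j in range(n_ctx)]

hidden = [rand_vec(d) for _ in range(n_query)]
fused = cross_attention(hidden, ctx_vecs)
print(len(fused), len(fused[0]))  # 3 8
```

The point of the sketch is the asymmetry the abstract describes: the main model processes only its short original input, while the overlong context enters solely through the compressed vectors, so compute stays close to the short-context baseline.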

Authors (6)
  1. Zhuo Chen (319 papers)
  2. Xinyu Wang (186 papers)
  3. Yong Jiang (194 papers)
  4. Pengjun Xie (85 papers)
  5. Fei Huang (409 papers)
  6. Kewei Tu (74 papers)

