Stochastic RAG: End-to-End Retrieval-Augmented Generation through Expected Utility Maximization (2405.02816v1)

Published 5 May 2024 in cs.CL, cs.IR, and cs.LG

Abstract: This paper introduces Stochastic RAG--a novel approach for end-to-end optimization of retrieval-augmented generation (RAG) models that relaxes the simplifying assumptions of marginalization and document independence, made in most prior work. Stochastic RAG casts the retrieval process in RAG as a stochastic sampling without replacement process. Through this formulation, we employ straight-through Gumbel-top-k that provides a differentiable approximation for sampling without replacement and enables effective end-to-end optimization for RAG. We conduct extensive experiments on seven diverse datasets on a wide range of tasks, from open-domain question answering to fact verification to slot-filling for relation extraction and to dialogue systems. By applying this optimization method to a recent and effective RAG model, we advance state-of-the-art results on six out of seven datasets.

Authors (2)
  1. Hamed Zamani (88 papers)
  2. Michael Bendersky (63 papers)
Citations (11)

Summary

"Stochastic RAG: End-to-End Retrieval-Augmented Generation through Expected Utility Maximization" introduces Stochastic RAG, a groundbreaking approach for the optimization of retrieval-augmented generation (RAG) models. Traditional RAG models often depend on simplifying assumptions such as marginalization and document independence, which can limit their performance. This paper aims to address these limitations by presenting a novel formulation that frames the retrieval process as a stochastic sampling without replacement.

The authors employ straight-through Gumbel-top-k, a differentiable approximation to sampling without replacement, which enables effective end-to-end optimization of the full RAG pipeline. This sidesteps the need to marginalize over retrieved documents, yielding a more direct and integrated training process.
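
As a rough illustration of the mechanism (a minimal PyTorch sketch, not the paper's implementation; the function and variable names are ours), Gumbel-top-k perturbs each retrieval score with independent Gumbel noise and takes the top k, which is equivalent to sampling k documents without replacement from the softmax distribution over scores; a straight-through estimator then lets gradients flow through the discrete selection:

```python
import torch

def straight_through_gumbel_top_k(scores: torch.Tensor, k: int, tau: float = 1.0) -> torch.Tensor:
    """Differentiable approximation of sampling k items without replacement.

    scores: (num_docs,) unnormalized retrieval scores.
    Returns a (num_docs,) mask that is hard (0/1, exactly k ones) in the
    forward pass but carries soft gradients in the backward pass.
    """
    # Perturb scores with i.i.d. Gumbel(0, 1) noise; the top-k indices of
    # the perturbed scores are a sample without replacement from the
    # distribution induced by softmax(scores).
    gumbel = -torch.log(-torch.log(torch.rand_like(scores)))
    perturbed = (scores + gumbel) / tau

    # Soft relaxation: a single softmax over the perturbed scores
    # (a simplification; tighter relaxed top-k operators also exist).
    soft = torch.softmax(perturbed, dim=-1)

    # Hard sample: a k-hot mask over the top-k perturbed scores.
    topk = torch.topk(perturbed, k, dim=-1).indices
    hard = torch.zeros_like(scores).scatter(-1, topk, 1.0)

    # Straight-through: forward pass returns `hard`, backward pass
    # routes gradients through `soft`.
    return hard + soft - soft.detach()

# Hypothetical usage: scores for 100 candidate documents.
scores = torch.randn(100, requires_grad=True)
mask = straight_through_gumbel_top_k(scores, k=5)
# A downstream generation loss computed on the 5 selected documents
# backpropagates into `scores` through the soft relaxation.
```

In the forward pass the mask selects exactly k documents, so the generator conditions on a genuine retrieved set, while the backward pass uses the soft relaxation so the retriever's scores receive gradients from the generation loss.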

To validate the effectiveness of Stochastic RAG, the authors conducted extensive experiments across seven diverse datasets. These datasets cover a broad spectrum of tasks, including:

  • Open-domain question answering
  • Fact verification
  • Slot-filling for relation extraction
  • Dialogue systems

Across these experiments, Stochastic RAG achieved state-of-the-art results on six of the seven datasets. This highlights both the versatility of the approach across applications and its robustness across different types of language generation and retrieval tasks.

Overall, the paper makes notable contributions by relaxing restrictive assumptions and providing a robust and versatile framework for RAG model optimization. This innovation has the potential to significantly enhance the performance and applicability of RAG models across a wider range of tasks and datasets.