
RAG-Gym: Systematic Optimization of Language Agents for Retrieval-Augmented Generation (2502.13957v2)

Published 19 Feb 2025 in cs.CL and cs.AI

Abstract: Retrieval-augmented generation (RAG) has shown great promise for knowledge-intensive tasks and recently advanced with agentic RAG, where language agents engage in multi-round interactions with external knowledge sources for adaptive information retrieval. However, existing agentic RAG methods often depend on ad-hoc prompt engineering and lack a unified optimization framework. We introduce RAG-Gym, a comprehensive platform that systematically explores three optimization dimensions: (1) prompt engineering, (2) actor tuning, and (3) critic training. For prompt engineering, we propose Re$2$Search, a novel agent incorporating reasoning reflection that significantly outperforms standard prompts. In actor tuning, we evaluate three popular post-training algorithms with fine-grained process supervision and identify direct preference optimization as the most effective. We further demonstrate that a trained critic can enhance inference by selecting higher-quality intermediate reasoning steps. Together, these findings lead to the optimized Re$2$Search++ agent, which surpasses most recent methods like Search-R1 by a relative increase of 3.2% to 11.6% in average F1. Finally, we examine the impact of different reward sources and analyze scaling properties in training and inference, offering practical insights for agentic RAG optimization. The project homepage is available at https://rag-gym.github.io.

RAG-Gym: Optimizing Reasoning and Search Agents with Process Supervision

The paper "RAG-Gym: Optimizing Reasoning and Search Agents with Process Supervision" introduces a framework designed to improve information-seeking agents by combining retrieval-augmented generation (RAG) with process supervision. The work addresses a key limitation of traditional RAG architectures: their dependence on static, one-shot retrieval, which restricts their utility on complex tasks that require sequential information gathering, such as multi-hop question answering.

The authors propose the RAG-Gym framework, which re-envisions the process of knowledge-intensive question answering as a nested Markov Decision Process (MDP). This structure divides the task into an outer MDP, which orchestrates high-level actions interacting with an information retrieval (IR) environment, and an inner MDP that manages the detailed token generation within LLMs. Such an approach allows the incorporation of fine-grained process supervision, thus optimizing language agent policies through iterative assessments of intermediate steps rather than solely through final outcome evaluations.
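The outer MDP described above can be sketched as a simple interaction loop. This is an illustrative reconstruction, not the paper's implementation: `agent` stands in for the LLM policy (whose token-level generation is the inner MDP), and `retriever` stands in for the IR environment.

```python
from dataclasses import dataclass, field

@dataclass
class State:
    """Outer-MDP state: the question plus the history of queries and retrieved documents."""
    question: str
    history: list = field(default_factory=list)

def outer_mdp_episode(question, agent, retriever, max_steps=5):
    """One episode of the outer MDP. The agent emits high-level actions
    (a search query or a final answer); search actions transition the
    environment by retrieving documents that extend the state."""
    state = State(question)
    for _ in range(max_steps):
        action = agent(state)                  # inner MDP: token-level generation
        if action["type"] == "answer":
            return action["content"], state    # terminal action
        docs = retriever(action["content"])    # IR environment transition
        state.history.append((action["content"], docs))
    return None, state                         # step budget exhausted
```

Process supervision then attaches a reward to each intermediate `action`, rather than only to the final answer.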

A key innovation presented is the ReSearch agent, which unifies the reasoning of answers with the generation of search queries, thereby ensuring that retrieval actions directly contribute to answer formulation. The ReSearch architecture strategically leverages refined answer reasoning to identify knowledge gaps in a question, driving search queries that specifically aim to fill these gaps. This contrasts markedly with existing agents like ReAct, which depend heavily on heuristic-driven prompts that may not generalize seamlessly across diverse tasks.
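The reason-then-search pattern can be illustrated with a minimal sketch. The prompt strings and the `llm` callable here are hypothetical stand-ins, not the paper's actual templates: the point is the ordering, in which answer reasoning comes first and any unsupported claim in it becomes the next query.

```python
def research_step(question, evidence, llm):
    """One ReSearch-style step (hypothetical prompts): draft answer
    reasoning from current evidence, then turn any claim that the
    evidence does not support into the next search query."""
    draft = llm(
        f"Question: {question}\nEvidence: {evidence}\n"
        "Reason step by step toward an answer."
    )
    gap = llm(
        f"Reasoning: {draft}\n"
        "Name one claim above not supported by the evidence, or say NONE."
    )
    if gap.strip() == "NONE":
        return {"type": "answer", "content": draft}   # no gaps: answer
    return {"type": "search", "content": gap}         # gap found: query it
```

Because each query is derived from a concrete gap in the draft reasoning, retrieval is targeted at what the answer still lacks rather than at generic prompt heuristics.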

Empirical evaluations on HotpotQA, 2WikiMultihopQA, Bamboogle, and MedQA demonstrate the advantages of RAG-Gym and ReSearch, including a 25.6% performance improvement over baseline methods. The paper also highlights the effectiveness of the proposed process reward models, which yield significant gains in answer accuracy and reasoning robustness when trained on fine-grained process annotations derived from LLM outputs such as GPT-4o.

Furthermore, the trained process reward models are shown to transfer across different LLMs, making them useful for optimizing proprietary models where direct parameter tuning is constrained. An analysis of scaling behavior in both training and inference offers additional insight into how RAG-Gym performs at different operational scales.

In conclusion, this paper offers significant contributions to the field of machine learning by presenting a comprehensive framework—RAG-Gym—that bridges current gaps in retrieval-augmented generation for complex, multi-hop reasoning tasks. The proposed combination of a nested MDP approach with process-level supervision offers a paradigm shift in how information-seeking agents are trained and optimized, potentially setting a new standard for future AI research and application in diverse, knowledge-intensive domains.

Authors (12)
  1. Guangzhi Xiong
  2. Qiao Jin
  3. Xiao Wang
  4. Yin Fang
  5. Haolin Liu
  6. Yifan Yang
  7. Fangyuan Chen
  8. Zhixing Song
  9. Dengyu Wang
  10. Minjia Zhang
  11. Zhiyong Lu
  12. Aidong Zhang