ReSearch: Learning to Reason with Search for LLMs via Reinforcement Learning (2503.19470v2)

Published 25 Mar 2025 in cs.AI and cs.CL

Abstract: LLMs have shown remarkable capabilities in reasoning, exemplified by the success of OpenAI-o1 and DeepSeek-R1. However, integrating reasoning with external search processes remains challenging, especially for complex multi-hop questions requiring multiple retrieval steps. We propose ReSearch, a novel framework that trains LLMs to Reason with Search via reinforcement learning without using any supervised data on reasoning steps. Our approach treats search operations as integral components of the reasoning chain, where when and how to perform searches is guided by text-based thinking, and search results subsequently influence further reasoning. We train ReSearch on Qwen2.5-7B(-Instruct) and Qwen2.5-32B(-Instruct) models and conduct extensive experiments. Despite being trained on only one dataset, our models demonstrate strong generalizability across various benchmarks. Analysis reveals that ReSearch naturally elicits advanced reasoning capabilities such as reflection and self-correction during the reinforcement learning process.

Overview of "ReSearch: Learning to Reason with Search for LLMs via Reinforcement Learning"

The paper "ReSearch: Learning to Reason with Search for LLMs via Reinforcement Learning" presents a novel framework designed to enhance the reasoning capabilities of LLMs when integrated with external search processes. Traditionally, LLMs have demonstrated notable proficiency in tasks demanding internal knowledge synthesis derived from pre-training data. However, challenges arise when these models are required to incorporate external information through search mechanisms to address complex multi-hop questions necessitating numerous retrieval steps. ReSearch seeks to address these challenges by employing reinforcement learning (RL) to train LLMs without recourse to supervised data related to reasoning steps.

Methodology

The authors treat search operations as integral components of the reasoning process, so the LLM learns both when to retrieve and how to use what it retrieves. Training uses RL, specifically Group Relative Policy Optimization (GRPO), which lets the model refine its reasoning without explicit step-level supervision. Text-based thinking determines when a search is issued and what it queries, and the returned results in turn shape subsequent reasoning, so the two processes are trained jointly rather than pipelined.
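To make the GRPO component concrete, here is a minimal sketch of its group-relative advantage computation, which replaces the learned critic of PPO-style methods: several rollouts are sampled per question, and each rollout's scalar reward is normalized against its own group. The reward values below are made up for illustration.

```python
import numpy as np

def grpo_advantages(group_rewards: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """GRPO advantage: normalize each rollout's scalar reward against
    the mean and std of its own sampling group. No value network is
    needed, which keeps RL training for LLMs comparatively cheap."""
    return (group_rewards - group_rewards.mean()) / (group_rewards.std() + eps)

# Example: 8 rollouts sampled for the same multi-hop question.
# Rewards here are hypothetical final-answer scores.
rewards = np.array([1.0, 0.0, 0.0, 1.0, 0.3, 0.0, 1.0, 0.0])
advantages = grpo_advantages(rewards)
# Rollouts that beat their group's average get positive advantages and
# are reinforced; below-average rollouts are suppressed.
print(advantages.round(2))
```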

Framework Details

In ReSearch, search operations are embedded within the reasoning chain and marked by dedicated tags for thinking, search queries, and retrieved results. The tags delineate each stage, and the model learns to construct such chains autonomously: training starts without any labeled reasoning chains, and reasoning behavior emerges purely from optimizing a reward signal derived from the final answer. A sketch of one such rollout follows.
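The control flow can be sketched as below. The tag names follow the stages described above, while `generate` and `retrieve` are assumed interfaces standing in for the policy model and the retriever; this illustrates the protocol, not the authors' implementation.

```python
import re

SEARCH_RE = re.compile(r"<search>(.*?)</search>", re.DOTALL)
ANSWER_RE = re.compile(r"<answer>(.*?)</answer>", re.DOTALL)

def rollout(generate, retrieve, question, max_steps=8):
    """Interleave decoding with retrieval. `generate(prompt, stop)` is
    assumed to return the continuation including the stop string it
    hit; `retrieve(query)` returns retrieved passages as plain text."""
    context = f"Question: {question}\n<think>"
    for _ in range(max_steps):
        chunk = generate(context, stop=["</search>", "</answer>"])
        context += chunk
        final = ANSWER_RE.search(chunk)
        if final:
            # The model committed to a final answer; the chain ends.
            return context, final.group(1).strip()
        query = SEARCH_RE.search(chunk)
        if query:
            # Splice retrieved passages back into the context inside
            # <result> tags so the next thinking step can condition
            # on them.
            passages = retrieve(query.group(1).strip())
            context += f"\n<result>\n{passages}\n</result>\n"
    return context, None  # step budget exhausted without an answer
```

The scalar reward that GRPO consumes is then computed from the returned answer against the gold answer (one common scoring is sketched in the next section). In setups like this, the retrieved `<result>` spans are typically masked out of the RL loss so the model is not trained to imitate retrieved text.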

Experimental Setup

The framework was evaluated on multiple multi-hop benchmarks: HotpotQA, 2WikiMultiHopQA, MuSiQue, and Bamboogle. Although trained on only one dataset, the ReSearch models outperformed baselines such as naive generation and retrieval-augmented generation, improving accuracy by 8.9% to 22.4% absolute across these datasets.
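Accuracy on such open-domain QA benchmarks is conventionally scored by exact match after light answer normalization. The helper below shows that standard normalization; it is a common formulation offered for reference, not necessarily the paper's exact evaluation script, and the same score can double as the outcome reward discussed earlier.

```python
import re
import string

def normalize(text: str) -> str:
    """Lowercase, drop punctuation and articles, collapse whitespace
    (the normalization conventionally used for open-domain QA)."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction: str, golds: list[str]) -> bool:
    """A prediction counts as correct if it matches any gold answer."""
    return normalize(prediction) in {normalize(g) for g in golds}

assert exact_match("The Eiffel Tower!", ["eiffel tower"])
```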

Implications and Future Directions

ReSearch advances the integration of reasoning and retrieval in LLMs and provides a foundation for further work on multi-step reasoning methods. Its generalization across diverse question formats and datasets, despite single-dataset training, suggests the approach is robust and broadly applicable. Future work could extend the framework to additional external tools and domains, further strengthening the reasoning capabilities of LLMs.

In conclusion, ReSearch is a promising step toward harmonizing LLMs with external search, moving them beyond static knowledge synthesis into dynamic, multi-hop reasoning with retrieval, learned end to end through reinforcement learning.

Authors (12)
  1. Mingyang Chen (45 papers)
  2. Tianpeng Li (14 papers)
  3. Haoze Sun (21 papers)
  4. Yijie Zhou (16 papers)
  5. Chenzheng Zhu (3 papers)
  6. Fan Yang (878 papers)
  7. Zenan Zhou (24 papers)
  8. Weipeng Chen (56 papers)
  9. Haofen Wang (32 papers)
  10. Jeff Z. Pan (78 papers)
  11. Wen Zhang (170 papers)
  12. Huajun Chen (198 papers)