IM-RAG: Multi-Round Retrieval-Augmented Generation Through Learning Inner Monologues (2405.13021v1)

Published 15 May 2024 in cs.CL, cs.AI, and cs.IR

Abstract: Although the Retrieval-Augmented Generation (RAG) paradigms can use external knowledge to enhance and ground the outputs of LLMs to mitigate generative hallucinations and static knowledge base problems, they still suffer from limited flexibility in adopting Information Retrieval (IR) systems with varying capabilities, constrained interpretability during the multi-round retrieval process, and a lack of end-to-end optimization. To address these challenges, we propose a novel LLM-centric approach, IM-RAG, that integrates IR systems with LLMs to support multi-round RAG through learning Inner Monologues (IM, i.e., the human inner voice that narrates one's thoughts). During the IM process, the LLM serves as the core reasoning model (i.e., Reasoner) to either propose queries to collect more information via the Retriever or to provide a final answer based on the conversational context. We also introduce a Refiner that improves the outputs from the Retriever, effectively bridging the gap between the Reasoner and IR modules with varying capabilities and fostering multi-round communications. The entire IM process is optimized via Reinforcement Learning (RL) where a Progress Tracker is incorporated to provide mid-step rewards, and the answer prediction is further separately optimized via Supervised Fine-Tuning (SFT). We conduct extensive experiments with the HotPotQA dataset, a popular benchmark for retrieval-based, multi-step question-answering. The results show that our approach achieves state-of-the-art (SOTA) performance while providing high flexibility in integrating IR modules as well as strong interpretability exhibited in the learned inner monologues.
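The multi-round loop described in the abstract — a Reasoner that alternates between issuing queries and answering, a Retriever, a Refiner that filters raw retrievals, and a Progress Tracker that assigns mid-step rewards — can be sketched as follows. This is a hypothetical toy illustration only: the component names follow the paper's terminology, but the implementations (keyword matching, a dictionary corpus, the `im_rag` driver) are invented stand-ins, not the paper's actual LLM- and RL-based method.

```python
# Toy sketch of the IM-RAG multi-round loop. Component names (Reasoner,
# Retriever, Refiner, Progress Tracker) come from the paper; the bodies
# below are illustrative placeholders, not the real LLM/RL components.

def reasoner(context, question):
    """Decide whether to query for more evidence or emit a final answer."""
    if any("capital of France" in passage for passage in context):
        return ("answer", "Paris")
    return ("query", "capital of France")

def retriever(query):
    """Stand-in IR module: a tiny in-memory corpus keyed by query."""
    corpus = {
        "capital of France": [
            "The capital of France is Paris.",
            "France is in Europe.",
        ]
    }
    return corpus.get(query, [])

def refiner(passages, question):
    """Filter/rerank raw retrievals before adding them to the context."""
    return [p for p in passages if "capital" in p]

def progress_tracker(context, gold="Paris"):
    """Mid-step reward: 1.0 once collected evidence contains the gold answer."""
    return float(any(gold in p for p in context))

def im_rag(question, max_rounds=3):
    """Run the inner-monologue loop: query, retrieve, refine, score, repeat."""
    context, rewards = [], []
    for _ in range(max_rounds):
        action, payload = reasoner(context, question)
        if action == "answer":
            return payload, rewards
        context.extend(refiner(retriever(payload), question))
        rewards.append(progress_tracker(context))
    return None, rewards

answer, rewards = im_rag("What is the capital of France?")
```

In the paper itself the Reasoner is an LLM trained with RL using the Progress Tracker's mid-step rewards, and the final answer prediction is optimized separately via SFT; the loop structure above only mirrors the control flow.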

Authors (7)
  1. Diji Yang (10 papers)
  2. Jinmeng Rao (19 papers)
  3. Kezhen Chen (12 papers)
  4. Xiaoyuan Guo (14 papers)
  5. Yawen Zhang (23 papers)
  6. Jie Yang (516 papers)
  7. Yi Zhang (994 papers)
Citations (4)