
Search-in-the-Chain: Interactively Enhancing Large Language Models with Search for Knowledge-intensive Tasks (2304.14732v7)

Published 28 Apr 2023 in cs.CL

Abstract: Making the content generated by LLMs accurate, credible, and traceable is crucial, especially for complex knowledge-intensive tasks that require multi-step reasoning in which each step needs knowledge to solve. Retrieval-augmented generation has good potential to address this problem, but where and how to introduce Information Retrieval (IR) into the LLM is a major challenge. Prior work suffers from two problems: wrong knowledge retrieved by IR can mislead the LLM, and the interaction between IR and the LLM can break the LLM's reasoning chain. This paper proposes a novel framework named Search-in-the-Chain (SearChain) for the interaction between LLM and IR to address these challenges. First, the LLM generates a reasoning chain named Chain-of-Query (CoQ), where each node consists of an IR-oriented query-answer pair. Second, IR verifies the answer of each node of the CoQ and corrects any answer that is inconsistent with the retrieved information when IR has high confidence, which improves credibility. Third, the LLM can indicate its missing knowledge in the CoQ and rely on IR to provide that knowledge. These operations improve accuracy in terms of both reasoning and knowledge. Finally, SearChain generates the reasoning process and marks references to supporting documents for each reasoning step, which improves traceability. Interaction with IR in SearChain forms a novel tree-based reasoning path that enables the LLM to dynamically modify the direction of reasoning. Experiments show that SearChain outperforms state-of-the-art baselines on complex knowledge-intensive tasks including multi-hop Q&A, slot filling, fact checking, and long-form Q&A.

Enhancing LLMs with Search for Knowledge-intensive Tasks

The paper "Search-in-the-Chain: Interactively Enhancing LLMs with Search for Knowledge-intensive Tasks" addresses a prominent challenge faced by LLMs in executing complex, knowledge-intensive tasks that require multi-step reasoning. LLMs, though successful across many NLP tasks, often struggle with compositional reasoning, keeping factual knowledge current, and avoiding hallucinations. These limitations impede their application in scenarios demanding high accuracy, credibility, and traceability.

Framework Overview

The authors introduce a novel framework called Search-in-the-Chain (SearChain), which synergizes Information Retrieval (IR) with LLMs to augment their reasoning capabilities without disrupting the coherence of their reasoning chains. The SearChain framework operates through multiple rounds of interaction between IR and LLMs. It involves:

  1. Chain-of-Query (CoQ) Construction: The LLM generates a reasoning chain — CoQ — where each node comprises an IR-oriented query-answer pair. This chain aids in ensuring continuity and coherence in reasoning, allowing the LLM to plan the entire reasoning process before IR involvement.
  2. Verification and Completion by IR:
    • Verification: IR validates each answer in CoQ and corrects discrepancies when confident, thus enhancing the credibility of the responses.
    • Completion: For queries flagged by LLMs as unsolved, IR supplies the necessary information. This selective intervention helps avert potential misleading influences from inaccurate retrievals.
  3. Tree-of-Reasoning (ToR) and Dynamic Adjustment: Unlike chain-based methods, SearChain forms a tree-based dynamic reasoning path, allowing the LLM to modify its reasoning direction based on feedback from IR.
  4. Tracing for Traceability: SearChain marks references to supporting documents for each reasoning step, improving the traceability of LLM-generated content.
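The interaction loop described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: `llm_generate_coq` and `retrieve` are hypothetical stubs standing in for the LLM and the retriever, and the `[unsolved]` marker, confidence threshold, and round limit are assumptions made for illustration.

```python
# Hypothetical sketch of the SearChain interaction loop.
# llm_generate_coq() and retrieve() are placeholder stubs, not the paper's API.

from dataclasses import dataclass

@dataclass
class Node:
    query: str
    answer: str          # answer proposed by the LLM (or "[unsolved]")
    reference: str = ""  # supporting document attached during verification

def llm_generate_coq(question, feedback=None):
    """Stub: ask the LLM for a Chain-of-Query, optionally revised with feedback."""
    return [Node(query=question, answer="[unsolved]")]

def retrieve(query):
    """Stub: return (top document, answer extracted from it, confidence)."""
    return "doc text", "retrieved answer", 0.9

def searchain(question, max_rounds=5, confidence_threshold=0.8):
    feedback = None
    for _ in range(max_rounds):
        # The LLM (re)plans the whole reasoning chain before IR steps in.
        coq = llm_generate_coq(question, feedback)
        feedback = None
        for node in coq:
            doc, ir_answer, conf = retrieve(node.query)
            if node.answer == "[unsolved]":
                # Completion: the LLM flagged missing knowledge; IR supplies it.
                feedback = f"Answer to '{node.query}': {ir_answer}"
                node.reference = doc
                break
            if conf >= confidence_threshold and ir_answer != node.answer:
                # Verification: a high-confidence retrieval corrects the node,
                # and the LLM re-plans from here (a new branch of the tree).
                feedback = f"Correction for '{node.query}': {ir_answer}"
                node.reference = doc
                break
        if feedback is None:
            return coq  # every node verified; chain carries its references
    return coq
```

Breaking out of the inner loop on the first correction or completion is what makes the reasoning path tree-shaped: each piece of IR feedback spawns a revised chain rather than patching the old one in place.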

Experimental Evaluation

The framework is evaluated on a spectrum of complex tasks: multi-hop question answering, slot filling, fact checking, and long-form question answering. SearChain consistently outperforms existing methods such as Chain-of-Thought (CoT), Self-Ask, and DSP by combining its structured reasoning with interactive IR integration. Key results show notable improvements in cover-EM and ROUGE-L, demonstrating gains in both accuracy and traceability.
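Cover-EM (cover exact match) scores a prediction as correct if the gold answer appears anywhere in the generated text, which suits long-form outputs. A minimal sketch of the idea, using a common normalization scheme; this is my own illustration, not the authors' evaluation code:

```python
import re
import string

def normalize(text):
    """Lowercase, drop articles and punctuation, collapse whitespace."""
    text = text.lower()
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    text = text.translate(str.maketrans("", "", string.punctuation))
    return " ".join(text.split())

def cover_em(prediction, gold_answers):
    """1 if any normalized gold answer is a substring of the normalized prediction."""
    pred = normalize(prediction)
    return int(any(normalize(g) in pred for g in gold_answers))

# A long-form answer "covers" the gold span, so it scores 1.
score = cover_em("The film was directed by Christopher Nolan in 2010.",
                 ["Christopher Nolan"])  # → 1
```

ROUGE-L, the other reported metric, instead measures the longest common subsequence between prediction and reference, rewarding fluent overlap rather than exact coverage.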

Implications and Future Work

The implications of this work are multifaceted. Practically, SearChain offers a robust approach to integrating retrieval mechanisms in LLM workflows, ensuring that generated content is not only accurate and complete but also transparently linkable to its sources. Theoretically, this framework enriches the existing paradigms of retrieval-augmented models by addressing the challenges of coherence and misleading retrievals.

Moving forward, the research community can explore scaling SearChain to other types of LMs and further optimize the interaction strategy between IR and LLMs. Additionally, refining confidence measures in retrieval outputs and enhancing retrieval strategies with more dynamic knowledge graphs could amplify the framework's robustness.

In conclusion, the Search-in-the-Chain framework represents a significant step in harmonizing retrieval processes with LLM reasoning, setting a foundation for future advancements in AI solutions capable of tackling complex, knowledge-intensive tasks with greater confidence and reliability.

Authors (5)
  1. Shicheng Xu
  2. Liang Pang
  3. Huawei Shen
  4. Xueqi Cheng
  5. Tat-Seng Chua
Citations (21)