
ChatRetriever: Adapting Large Language Models for Generalized and Robust Conversational Dense Retrieval (2404.13556v1)

Published 21 Apr 2024 in cs.IR and cs.CL

Abstract: Conversational search requires accurate interpretation of user intent from complex multi-turn contexts. This paper presents ChatRetriever, which inherits the strong generalization capability of LLMs to robustly represent complex conversational sessions for dense retrieval. To achieve this, we propose a simple and effective dual-learning approach that adapts the LLM for retrieval via contrastive learning, while enhancing complex session understanding through masked instruction tuning on high-quality conversational instruction-tuning data. Extensive experiments on five conversational search benchmarks demonstrate that ChatRetriever substantially outperforms existing conversational dense retrievers, achieving state-of-the-art performance on par with LLM-based rewriting approaches. Furthermore, ChatRetriever exhibits superior robustness in handling diverse conversational contexts. Our work highlights the potential of adapting LLMs for retrieval with complex inputs like conversational search sessions and proposes an effective approach to advance this research direction.

Authors (7)
  1. Kelong Mao (23 papers)
  2. Chenlong Deng (7 papers)
  3. Haonan Chen (49 papers)
  4. Fengran Mo (35 papers)
  5. Zheng Liu (312 papers)
  6. Tetsuya Sakai (30 papers)
  7. Zhicheng Dou (113 papers)
Citations (8)