
Learning to Relate to Previous Turns in Conversational Search (2306.02553v1)

Published 5 Jun 2023 in cs.IR and cs.CL

Abstract: Conversational search allows a user to interact with a search system over multiple turns. A query is strongly dependent on the conversation context. An effective way to improve retrieval effectiveness is to expand the current query with historical queries. However, not all previous queries are related to the current query or useful for expanding it. In this paper, we propose a new method to select the historical queries that are useful for the current query. To cope with the lack of labeled training data, we use a pseudo-labeling approach that annotates useful historical queries based on their impact on the retrieval results. The pseudo-labeled data are used to train a selection model. We further propose a multi-task learning framework to jointly train the selector and the retriever during fine-tuning, which mitigates possible inconsistency between the pseudo labels and the updated retriever. Extensive experiments on four conversational search datasets demonstrate the effectiveness and broad applicability of our method compared with several strong baselines.
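The abstract describes the approach only at a high level. Below is a minimal sketch of how the two core steps, pseudo-labeling by retrieval impact and joint selector/retriever training, could look. The scoring function `score_fn`, the MLP selector architecture, the query-concatenation scheme, and the loss weight `lam` are all illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of pseudo-labeling and multi-task training, assuming a
# frozen retriever scoring function and a toy selector. Names and
# hyperparameters here are hypothetical, not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


def pseudo_label_history(current_q, history, gold_passage, score_fn):
    """Label each historical query 1 if prepending it to the current query
    raises the retrieval score of the gold passage, else 0.

    score_fn(query, passage) -> float is any frozen retriever's relevance
    score (assumption: higher is better).
    """
    base = score_fn(current_q, gold_passage)
    labels = []
    for past_q in history:
        expanded = past_q + " " + current_q
        labels.append(int(score_fn(expanded, gold_passage) > base))
    return labels


class Selector(nn.Module):
    """Toy binary classifier over (historical query, current query) pair
    embeddings; stands in for the paper's selection model."""

    def __init__(self, dim):
        super().__init__()
        self.scorer = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1)
        )

    def forward(self, hist_emb, cur_emb):
        # hist_emb: (n_hist, dim); cur_emb: (dim,)
        pair = torch.cat([hist_emb, cur_emb.expand_as(hist_emb)], dim=-1)
        return self.scorer(pair).squeeze(-1)  # (n_hist,) usefulness logits


def multitask_loss(select_logits, pseudo_labels, ranking_loss, lam=0.5):
    """Joint objective: the retriever's ranking loss plus the selector's
    classification loss against the pseudo labels (lam is an assumed weight)."""
    select_loss = F.binary_cross_entropy_with_logits(
        select_logits, pseudo_labels.float()
    )
    return ranking_loss + lam * select_loss
```

Training both objectives jointly, rather than freezing the selector after pseudo-labeling, lets the selector adapt as the retriever's fine-tuning shifts which expansions actually help, which is the inconsistency the multi-task framework is meant to address.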

Authors (7)
  1. Fengran Mo (35 papers)
  2. Jian-Yun Nie (70 papers)
  3. Kaiyu Huang (16 papers)
  4. Kelong Mao (23 papers)
  5. Yutao Zhu (63 papers)
  6. Peng Li (390 papers)
  7. Yang Liu (2253 papers)
Citations (18)