
MLR: A Two-stage Conversational Query Rewriting Model with Multi-task Learning (2004.05812v1)

Published 13 Apr 2020 in cs.CL, cs.AI, and cs.LG

Abstract: Conversational context understanding aims to recognize the user's real intention from the conversation history, which is critical for building dialogue systems. However, multi-turn conversation understanding in the open domain remains quite challenging, as it requires the system to extract important information and resolve dependencies in contexts across a variety of open topics. In this paper, we propose a conversational query rewriting model, MLR, which is a Multi-task model on sequence Labeling and query Rewriting. MLR reformulates multi-turn conversational queries into a single-turn query that conveys the user's true intention concisely and alleviates the difficulty of multi-turn dialogue modeling. In the model, we formulate query rewriting as a sequence generation problem and introduce word category information via an auxiliary word category label prediction task. To train our model, we construct a new Chinese query rewriting dataset and conduct experiments on it. The experimental results show that our model outperforms the compared models and demonstrate the effectiveness of word category information in improving rewriting performance.
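The abstract describes a multi-task setup: a shared encoder feeds both a sequence-generation decoder (query rewriting) and an auxiliary per-token word-category labeling head, trained under a joint loss. Below is a minimal PyTorch sketch of that general pattern; the Transformer architecture, layer sizes, class names, and the loss weight `lam` are all illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class MLRSketch(nn.Module):
    """Illustrative two-head multi-task model: a shared encoder feeds
    both a query-rewriting decoder and an auxiliary word-category
    tagger. All hyperparameters here are assumptions for illustration."""

    def __init__(self, vocab_size, num_categories, d_model=256, nhead=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead, batch_first=True),
            num_layers=2)
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead, batch_first=True),
            num_layers=2)
        self.gen_head = nn.Linear(d_model, vocab_size)      # rewriting (generation) task
        self.tag_head = nn.Linear(d_model, num_categories)  # auxiliary labeling task

    def forward(self, src_ids, tgt_ids):
        memory = self.encoder(self.embed(src_ids))
        tag_logits = self.tag_head(memory)                  # per-source-token category logits
        out = self.decoder(self.embed(tgt_ids), memory)
        gen_logits = self.gen_head(out)                     # next-token logits for the rewrite
        return gen_logits, tag_logits

def multi_task_loss(gen_logits, tgt_ids, tag_logits, tag_ids, lam=0.5):
    """Joint objective: rewriting cross-entropy plus a weighted
    word-category cross-entropy. `lam` is a hypothetical weight."""
    ce = nn.CrossEntropyLoss()
    gen_loss = ce(gen_logits.reshape(-1, gen_logits.size(-1)), tgt_ids.reshape(-1))
    tag_loss = ce(tag_logits.reshape(-1, tag_logits.size(-1)), tag_ids.reshape(-1))
    return gen_loss + lam * tag_loss
```

In this sketch the auxiliary tagging head shares the encoder with the rewriter, so gradients from the word-category task shape the same representations the decoder attends to, which is one plausible way category information could aid rewriting as the abstract suggests.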

Authors (6)
  1. Shuangyong Song (18 papers)
  2. Chao Wang (555 papers)
  3. Qianqian Xie (60 papers)
  4. Xinxing Zu (5 papers)
  5. Huan Chen (53 papers)
  6. Haiqing Chen (29 papers)
Citations (9)
