
Query-bag Matching with Mutual Coverage for Information-seeking Conversations in E-commerce (1911.02747v1)

Published 7 Nov 2019 in cs.CL and cs.IR

Abstract: An information-seeking conversation system aims to satisfy users' information needs through conversation. Text matching between a user query and a pre-collected question is an important part of information-seeking conversation in E-commerce. In practice, a group of questions often corresponds to the same answer, and these questions naturally form a bag. Learning to match the user query against the bag directly, which we call query-bag matching, may improve conversation performance. Motivated by this observation, we propose a query-bag matching model that mainly relies on the mutual coverage between the query and the bag, measuring the degree to which the content of the query is mentioned by the bag, and vice versa. In addition, a bag representation learned at the word level helps identify the main points of a bag in a fine-grained way and further improves query-bag matching performance. Experiments on two datasets show the effectiveness of our model.
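To make the mutual-coverage idea concrete, here is a minimal sketch that treats coverage as simple word overlap between the query and the concatenated bag of questions. This is an illustrative assumption only; the paper's model computes coverage with learned, word-level bag representations rather than raw token overlap, and the function and variable names below are hypothetical.

```python
# Illustrative sketch of query-bag "mutual coverage" as word overlap.
# Assumption: coverage is approximated by token overlap; the paper's
# actual model learns word-level bag representations instead.

from typing import List, Tuple


def coverage(source_tokens: List[str], target_tokens: List[str]) -> float:
    """Fraction of tokens in source_tokens that also appear in target_tokens."""
    if not source_tokens:
        return 0.0
    target = set(target_tokens)
    covered = sum(1 for tok in source_tokens if tok in target)
    return covered / len(source_tokens)


def mutual_coverage(query: str, bag: List[str]) -> Tuple[float, float]:
    """Return (query covered by bag, bag covered by query)."""
    q_tokens = query.lower().split()
    bag_tokens = [tok for question in bag for tok in question.lower().split()]
    return coverage(q_tokens, bag_tokens), coverage(bag_tokens, q_tokens)


if __name__ == "__main__":
    bag = ["when will my order arrive", "how long does shipping take"]
    print(mutual_coverage("how long until my order arrives", bag))
```

In this toy setting, a high query-to-bag coverage with low bag-to-query coverage suggests the bag mentions the query's content but also covers much more; the paper combines both directions as matching features.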

Authors (7)
  1. Zhenxin Fu (6 papers)
  2. Feng Ji (74 papers)
  3. Wenpeng Hu (8 papers)
  4. Wei Zhou (308 papers)
  5. Dongyan Zhao (144 papers)
  6. Haiqing Chen (29 papers)
  7. Rui Yan (250 papers)
Citations (2)