
A Hybrid Retrieval-Generation Neural Conversation Model (1904.09068v2)

Published 19 Apr 2019 in cs.IR and cs.CL

Abstract: Intelligent personal assistant systems that are able to have multi-turn conversations with human users are becoming increasingly popular. Most previous research has focused on using either retrieval-based or generation-based methods to develop such systems. Retrieval-based methods have the advantage of returning fluent and informative responses with great diversity. However, the performance of such methods is limited by the size of the response repository. On the other hand, generation-based methods can produce highly coherent responses on any topic, but the generated responses are often generic and uninformative due to the lack of grounding knowledge. In this paper, we propose a hybrid neural conversation model that combines the merits of both response retrieval and generation methods. Experimental results on Twitter and Foursquare data show that the proposed model outperforms both retrieval-based methods and generation-based methods (including a recently proposed knowledge-grounded neural conversation model) under both automatic evaluation metrics and human evaluation. We hope that the findings in this study provide new insights into how to integrate text retrieval and text generation models for building conversation systems.
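The retrieve-then-generate idea in the abstract can be illustrated with a toy pipeline: a retriever ranks repository responses against the query, and a generator grounds its reply in the best-retrieved candidate, falling back to a generic response otherwise. This is a minimal sketch under stated assumptions, not the paper's neural architecture: the word-overlap retriever, templated generator, and all data below are hypothetical stand-ins for the learned encoders and decoder.

```python
# Toy hybrid retrieve-then-generate pipeline (illustrative only; the paper's
# model uses neural retrieval and a seq2seq generator -- everything here is
# a hypothetical stand-in).

def retrieve(query, repository, k=2):
    """Rank repository responses by Jaccard word overlap with the query."""
    def overlap(cand):
        q = set(query.lower().split())
        c = set(cand.lower().split())
        return len(q & c) / (len(q | c) or 1)
    return sorted(repository, key=overlap, reverse=True)[:k]

def generate(query, retrieved):
    """Toy 'generator': grounds a templated reply in the top retrieved fact.
    A real generator would condition a decoder on the query plus retrieved text."""
    if retrieved:
        return f"About '{query}': {retrieved[0]}"
    return "I'm not sure."  # generic ungrounded fallback

repository = [
    "The coffee at Joe's Cafe is excellent.",
    "Parking downtown is free after 6pm.",
    "Joe's Cafe opens at 7am on weekdays.",
]

candidates = retrieve("when joe's cafe opens", repository)
reply = generate("when joe's cafe opens", candidates)
```

The sketch shows why the hybrid helps: the retrieved fact supplies specific grounding ("7am on weekdays") that a purely generative fallback would lack, while the generator can still respond when retrieval finds nothing relevant.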

Authors (9)
  1. Liu Yang (194 papers)
  2. Junjie Hu (111 papers)
  3. Minghui Qiu (58 papers)
  4. Chen Qu (37 papers)
  5. Jianfeng Gao (344 papers)
  6. W. Bruce Croft (46 papers)
  7. Xiaodong Liu (162 papers)
  8. Yelong Shen (83 papers)
  9. Jingjing Liu (139 papers)
Citations (85)