
Improving Response Selection in Multi-Turn Dialogue Systems by Incorporating Domain Knowledge (1809.03194v3)

Published 10 Sep 2018 in cs.AI, cs.CL, and cs.LG

Abstract: Building systems that can communicate with humans is a core problem in Artificial Intelligence. This work proposes a novel neural network architecture for response selection in an end-to-end multi-turn conversational dialogue setting. The architecture applies context-level attention and incorporates additional external knowledge provided by descriptions of domain-specific words. It uses a bi-directional Gated Recurrent Unit (GRU) for encoding context and responses and learns to attend over the context words given the latent response representation and vice versa. In addition, it incorporates external domain-specific information using another GRU for encoding the domain keyword descriptions. This allows better representation of domain-specific keywords in responses and hence improves the overall performance. Experimental results show that our model outperforms all other state-of-the-art methods for response selection in multi-turn conversations.
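
The following is a minimal sketch of the architecture as described in the abstract, not the authors' implementation: bi-directional GRUs encode the dialogue context, the candidate response, and the domain keyword descriptions; the context attends over the response representation and vice versa; and the summaries are combined into a matching score. All class names, hidden dimensions, the bilinear attention form, and the final scoring layer are assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class KnowledgeAwareResponseSelector(nn.Module):
    """Illustrative sketch (not the paper's exact model): bi-directional GRU
    encoders for context, response, and domain keyword descriptions, with
    mutual attention between context and response representations."""

    def __init__(self, vocab_size, embed_dim=300, hidden_dim=200):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        # Bi-directional GRU encoders (shared dimensions are an assumption).
        self.context_gru = nn.GRU(embed_dim, hidden_dim, bidirectional=True, batch_first=True)
        self.response_gru = nn.GRU(embed_dim, hidden_dim, bidirectional=True, batch_first=True)
        # Separate GRU for encoding the domain keyword descriptions.
        self.knowledge_gru = nn.GRU(embed_dim, hidden_dim, bidirectional=True, batch_first=True)
        enc_dim = 2 * hidden_dim
        # Bilinear attention parameters (one common formulation; the paper may differ).
        self.attn_c = nn.Linear(enc_dim, enc_dim, bias=False)
        self.attn_r = nn.Linear(enc_dim, enc_dim, bias=False)
        # Final matching layer over [context; response; knowledge] summaries.
        self.scorer = nn.Linear(3 * enc_dim, 1)

    def _attend(self, states, query, proj):
        # states: (B, T, D), query: (B, D) -> attention-weighted summary (B, D)
        scores = torch.bmm(proj(states), query.unsqueeze(2)).squeeze(2)  # (B, T)
        weights = F.softmax(scores, dim=1)
        return torch.bmm(weights.unsqueeze(1), states).squeeze(1)

    def forward(self, context_ids, response_ids, keyword_desc_ids):
        ctx_states, _ = self.context_gru(self.embedding(context_ids))    # (B, Tc, 2H)
        rsp_states, _ = self.response_gru(self.embedding(response_ids))  # (B, Tr, 2H)
        kb_states, _ = self.knowledge_gru(self.embedding(keyword_desc_ids))

        # Use the last hidden states as query/summary vectors.
        ctx_last, rsp_last = ctx_states[:, -1], rsp_states[:, -1]
        kb_summary = kb_states[:, -1]

        # Context attends over its words given the latent response
        # representation, and vice versa, as described in the abstract.
        ctx_attended = self._attend(ctx_states, rsp_last, self.attn_c)
        rsp_attended = self._attend(rsp_states, ctx_last, self.attn_r)

        features = torch.cat([ctx_attended, rsp_attended, kb_summary], dim=1)
        return torch.sigmoid(self.scorer(features)).squeeze(1)  # matching probability
```

In use, the model would score each candidate response against the concatenated multi-turn context, with keyword descriptions looked up for domain-specific terms appearing in the response; training with a binary cross-entropy objective over positive and negative candidates is a standard choice for this setup, though the paper's exact training procedure should be taken from the paper itself.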

Authors (4)
  1. Debanjan Chaudhuri (9 papers)
  2. Agustinus Kristiadi (28 papers)
  3. Jens Lehmann (80 papers)
  4. Asja Fischer (63 papers)
Citations (26)