Sequence-to-Sequence Learning for Task-oriented Dialogue with Dialogue State Representation (1806.04441v1)

Published 12 Jun 2018 in cs.CL

Abstract: Classic pipeline models for task-oriented dialogue systems require explicitly modeling dialogue states and hand-crafted action spaces to query a domain-specific knowledge base. Conversely, sequence-to-sequence models learn to map the dialogue history to the response in the current turn without explicit knowledge base querying. In this work, we propose a novel framework that combines the advantages of classic pipeline and sequence-to-sequence models. Our framework models a dialogue state as a fixed-size distributed representation and uses this representation to query a knowledge base via an attention mechanism. Experiments on the Stanford Multi-turn Multi-domain Task-oriented Dialogue Dataset show that our framework significantly outperforms other sequence-to-sequence baseline models on both automatic and human evaluation.
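
To make the core mechanism concrete, here is a minimal sketch (not the authors' code) of the idea the abstract describes: encode the dialogue history into a fixed-size state vector, then use that state to attend over knowledge-base (KB) entries. The module names, dimensions, GRU encoder, and bilinear scoring function are all illustrative assumptions; the paper's actual architecture may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StateKBAttention(nn.Module):
    """Sketch: fixed-size dialogue state attending over KB entries."""

    def __init__(self, vocab_size: int, hidden: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        # A GRU encoder stands in for whatever history encoder is used.
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        # Bilinear scoring between the dialogue state and each KB entry
        # (an assumed choice; any compatible attention score would do).
        self.score = nn.Linear(hidden, hidden, bias=False)

    def forward(self, history_ids, kb_entry_vecs):
        # history_ids:   (batch, seq_len) token ids of the dialogue history
        # kb_entry_vecs: (batch, num_entries, hidden) pre-encoded KB rows
        emb = self.embed(history_ids)
        _, state = self.encoder(emb)      # (1, batch, hidden)
        state = state.squeeze(0)          # fixed-size dialogue state vector
        # Attention over the KB: score each entry against the state,
        # normalize, and return a weighted KB summary for the decoder.
        scores = torch.einsum("bh,beh->be", self.score(state), kb_entry_vecs)
        attn = F.softmax(scores, dim=-1)  # (batch, num_entries)
        kb_summary = torch.einsum("be,beh->bh", attn, kb_entry_vecs)
        return state, attn, kb_summary
```

In this sketch, the returned `kb_summary` would be fed to a response decoder alongside the state, letting the model ground its reply in the KB without a hand-crafted query action space.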

Authors (5)
  1. Haoyang Wen (8 papers)
  2. Yijia Liu (19 papers)
  3. Wanxiang Che (152 papers)
  4. Libo Qin (77 papers)
  5. Ting Liu (329 papers)
Citations (54)