SUMBT+LaRL: Effective Multi-domain End-to-end Neural Task-oriented Dialog System (2009.10447v3)

Published 22 Sep 2020 in cs.CL

Abstract: Neural approaches to building the individual components of task-oriented dialog systems have advanced remarkably, yet optimizing overall system performance remains a challenge. Moreover, previous research on modeling complicated multi-domain goal-oriented dialogs in an end-to-end fashion has been limited. In this paper, we present SUMBT+LaRL, an effective multi-domain, end-to-end trainable neural dialog system that incorporates two previous strong models and makes them fully differentiable. Specifically, SUMBT+ estimates user acts as well as dialog belief states, and LaRL models latent system action spaces and generates responses given the estimated contexts. We emphasize that a three-step training framework significantly and stably increases dialog success rates: separately pretraining SUMBT+ and LaRL, fine-tuning the entire system, and then applying reinforcement learning to the dialog policy. We also introduce new reward criteria for reinforcement learning of the dialog policy, and we discuss experimental results under the different reward criteria and dialog evaluation methods. Consequently, our model achieved a new state-of-the-art success rate of 85.4% on corpus-based evaluation and a comparable success rate of 81.4% on the simulator-based evaluation provided by the DSTC8 challenge. To the best of our knowledge, this is the first comprehensive study of a modularized end-to-end multi-domain dialog system that is trained from the individual components up to the entire dialog policy for task success.
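
The three-step recipe the abstract describes (pretrain the modules separately, fine-tune end to end, then apply policy-gradient RL to the dialog policy) can be sketched compactly. Below is a minimal PyTorch sketch under heavy assumptions: the `SUMBTPlus` and `LaRL` classes, their internals, the loss terms, and the constant success reward are illustrative stand-ins, not the paper's actual architectures or released code.

```python
# Illustrative sketch of the three-step training framework.
# All module names, shapes, and rewards here are hypothetical stand-ins.
import torch
import torch.nn as nn


class SUMBTPlus(nn.Module):
    """Toy stand-in: encodes a dialog turn into a context vector and belief-state logits."""
    def __init__(self, vocab=1000, hidden=128, n_slots=30):
        super().__init__()
        self.emb = nn.Embedding(vocab, hidden)
        self.enc = nn.GRU(hidden, hidden, batch_first=True)
        self.belief_head = nn.Linear(hidden, n_slots)

    def forward(self, turn_ids):
        h, _ = self.enc(self.emb(turn_ids))
        context = h[:, -1]                       # last hidden state as dialog context
        return context, self.belief_head(context)


class LaRL(nn.Module):
    """Toy stand-in: maps context to a latent action, then to response logits."""
    def __init__(self, vocab=1000, hidden=128, n_latent=10):
        super().__init__()
        self.policy = nn.Linear(hidden, n_latent)   # latent-action logits
        self.decoder = nn.Linear(n_latent, vocab)   # response generator (one token, for brevity)

    def forward(self, context):
        z_logits = self.policy(context)
        z = torch.softmax(z_logits, dim=-1)         # relaxed latent action
        return z_logits, self.decoder(z)


def train(batches):
    sumbt, larl = SUMBTPlus(), LaRL()
    opt = torch.optim.Adam(list(sumbt.parameters()) + list(larl.parameters()))

    # Step 1: pretrain each module separately on supervised targets.
    for turn_ids, belief_gold, resp_gold in batches:
        ctx, belief_logits = sumbt(turn_ids)
        loss_b = nn.functional.cross_entropy(belief_logits, belief_gold)
        _, resp_logits = larl(ctx.detach())         # detach: no gradient flows between modules
        loss_r = nn.functional.cross_entropy(resp_logits, resp_gold)
        opt.zero_grad()
        (loss_b + loss_r).backward()
        opt.step()

    # Step 2: fine-tune the whole pipeline end to end (no detach).
    for turn_ids, belief_gold, resp_gold in batches:
        ctx, belief_logits = sumbt(turn_ids)
        _, resp_logits = larl(ctx)
        loss = (nn.functional.cross_entropy(belief_logits, belief_gold)
                + nn.functional.cross_entropy(resp_logits, resp_gold))
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Step 3: REINFORCE on the latent policy with a task-success reward.
    for turn_ids, _, _ in batches:
        ctx, _ = sumbt(turn_ids)
        z_logits, _ = larl(ctx)
        dist = torch.distributions.Categorical(logits=z_logits)
        action = dist.sample()
        reward = torch.ones_like(action, dtype=torch.float)  # placeholder success signal
        loss_pg = -(dist.log_prob(action) * reward).mean()
        opt.zero_grad()
        loss_pg.backward()
        opt.step()
```

The `detach()` in step 1 approximates training the modules independently; dropping it in step 2 is what makes the combined system fully differentiable, and step 3 optimizes the latent policy against a scalar task-success signal, mirroring the staged training the abstract describes.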

Authors (5)
  1. Hwaran Lee (31 papers)
  2. Seokhwan Jo (1 paper)
  3. Sangkeun Jung (9 papers)
  4. Tae-Yoon Kim (3 papers)
  5. Hyungjun Kim (18 papers)
Citations (5)