
A Benchmarking Environment for Reinforcement Learning Based Task Oriented Dialogue Management (1711.11023v2)

Published 29 Nov 2017 in stat.ML, cs.CL, and cs.NE

Abstract: Dialogue assistants are rapidly becoming an indispensable daily aid. To avoid the significant effort needed to hand-craft the required dialogue flow, the Dialogue Management (DM) module can be cast as a continuous Markov Decision Process (MDP) and trained through Reinforcement Learning (RL). Several RL models have been investigated over recent years. However, the lack of a common benchmarking framework makes it difficult to perform a fair comparison between different models and their capability to generalise to different environments. Therefore, this paper proposes a set of challenging simulated environments for dialogue model development and evaluation. To provide some baselines, we investigate a number of representative parametric algorithms, namely the deep reinforcement learning algorithms DQN, A2C and Natural Actor-Critic, and compare them to a non-parametric model, GP-SARSA. Both the environments and policy models are implemented using the publicly available PyDial toolkit and released online, in order to establish a testbed framework for further experiments and to facilitate experimental reproducibility.
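To make the kind of parametric baseline concrete, below is a minimal PyTorch sketch of a DQN update for a dialogue policy that maps belief-state features to summary actions. This is an illustrative implementation, not the paper's PyDial code: the dimensions, network shape, replay buffer, and hyperparameters are all assumptions chosen for readability.

```python
import random
from collections import deque

import numpy as np
import torch
import torch.nn as nn

# Illustrative sizes (assumptions, not from the paper):
# belief-state feature vector -> discrete summary actions.
state_dim, n_actions = 268, 16
gamma, batch_size = 0.99, 64

# Online Q-network and a periodically synced target network.
q_net = nn.Sequential(nn.Linear(state_dim, 128), nn.ReLU(),
                      nn.Linear(128, n_actions))
target_net = nn.Sequential(nn.Linear(state_dim, 128), nn.ReLU(),
                           nn.Linear(128, n_actions))
target_net.load_state_dict(q_net.state_dict())
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

# Replay buffer of (state, action, reward, next_state, done) transitions.
replay = deque(maxlen=10_000)

def dqn_update():
    """One gradient step on the temporal-difference loss."""
    if len(replay) < batch_size:
        return
    batch = random.sample(replay, batch_size)
    s, a, r, s2, done = map(np.array, zip(*batch))
    s = torch.as_tensor(s, dtype=torch.float32)
    a = torch.as_tensor(a, dtype=torch.int64)
    r = torch.as_tensor(r, dtype=torch.float32)
    s2 = torch.as_tensor(s2, dtype=torch.float32)
    done = torch.as_tensor(done, dtype=torch.float32)

    # Q(s, a) for the actions actually taken.
    q = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    # Bootstrapped target: r + gamma * max_a' Q_target(s', a').
    with torch.no_grad():
        target = r + gamma * (1 - done) * target_net(s2).max(1).values
    loss = nn.functional.mse_loss(q, target)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In a training loop, each dialogue turn would append a transition to `replay` and call `dqn_update()`, with `q_net` copied into `target_net` on a fixed schedule; the non-parametric GP-SARSA baseline in the paper instead models the Q-function with a Gaussian process and has no such network update.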

Authors (9)
  1. Iñigo Casanueva (18 papers)
  2. Paweł Budzianowski (27 papers)
  3. Pei-Hao Su (25 papers)
  4. Nikola Mrkšić (30 papers)
  5. Tsung-Hsien Wen (27 papers)
  6. Stefan Ultes (32 papers)
  7. Lina Rojas-Barahona (11 papers)
  8. Steve Young (30 papers)
  9. Milica Gašić (57 papers)
Citations (53)