A Benchmarking Environment for Reinforcement Learning Based Task Oriented Dialogue Management (1711.11023v2)
Abstract: Dialogue assistants are rapidly becoming an indispensable daily aid. To avoid the significant effort needed to hand-craft the required dialogue flow, the Dialogue Management (DM) module can be cast as a continuous Markov Decision Process (MDP) and trained through Reinforcement Learning (RL). Several RL models have been investigated over recent years. However, the lack of a common benchmarking framework makes it difficult to perform a fair comparison between different models and their capability to generalise to different environments. Therefore, this paper proposes a set of challenging simulated environments for dialogue model development and evaluation. To provide some baselines, we investigate a number of representative parametric algorithms, namely the deep reinforcement learning algorithms DQN, A2C and Natural Actor-Critic, and compare them to a non-parametric model, GP-SARSA. Both the environments and policy models are implemented using the publicly available PyDial toolkit and released online, in order to establish a testbed framework for further experiments and to facilitate experimental reproducibility.
- Iñigo Casanueva (18 papers)
- Paweł Budzianowski (27 papers)
- Pei-Hao Su (25 papers)
- Nikola Mrkšić (30 papers)
- Tsung-Hsien Wen (27 papers)
- Stefan Ultes (32 papers)
- Lina Rojas-Barahona (11 papers)
- Steve Young (30 papers)
- Milica Gašić (57 papers)
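To make the abstract's framing concrete, the sketch below shows the general shape of a value-based RL dialogue manager of the kind the paper benchmarks: a belief state is mapped to Q-values over system actions, actions are chosen epsilon-greedily, and transitions stored in a replay buffer drive semi-gradient TD updates (the core of DQN-style training). This is a minimal, self-contained illustration on a toy environment, not the paper's PyDial implementation; all names (`ReplayBuffer`, `LinearQ`, the reward scheme) are illustrative assumptions.

```python
import random
from collections import deque
import numpy as np

random.seed(0)

# Hypothetical toy dialogue MDP: the state is a small belief vector and the
# actions stand in for system dialogue acts. Dimensions are illustrative.
STATE_DIM, N_ACTIONS = 4, 3

class ReplayBuffer:
    """Fixed-capacity experience replay, as used in DQN-style training."""
    def __init__(self, capacity=1000):
        self.buf = deque(maxlen=capacity)
    def push(self, s, a, r, s2, done):
        self.buf.append((s, a, r, s2, done))
    def sample(self, k):
        return random.sample(self.buf, k)
    def __len__(self):
        return len(self.buf)

class LinearQ:
    """Linear Q-function over the belief state (a one-layer stand-in for
    the neural networks used by DQN/A2C in the paper)."""
    def __init__(self, lr=0.05):
        self.W = np.zeros((N_ACTIONS, STATE_DIM))
        self.lr = lr
    def q(self, s):
        return self.W @ s  # Q-value per action
    def act(self, s, eps):
        # Epsilon-greedy exploration over system actions.
        if random.random() < eps:
            return random.randrange(N_ACTIONS)
        return int(np.argmax(self.q(s)))
    def update(self, batch, gamma=0.99):
        # Semi-gradient TD(0) step toward the bootstrapped target.
        for s, a, r, s2, done in batch:
            target = r + (0.0 if done else gamma * np.max(self.q(s2)))
            td = target - self.q(s)[a]
            self.W[a] += self.lr * td * s

# Tiny training loop on a random toy environment: action 0 is the
# "correct" dialogue act and yields reward 1 (an assumed reward scheme).
rng = np.random.default_rng(0)
net, buf = LinearQ(), ReplayBuffer()
s = rng.random(STATE_DIM)
for t in range(200):
    a = net.act(s, eps=0.1)
    r = 1.0 if a == 0 else 0.0
    s2 = rng.random(STATE_DIM)
    buf.push(s, a, r, s2, t % 20 == 19)  # short episodes of 20 turns
    if len(buf) >= 32:
        net.update(buf.sample(32))
    s = s2
```

In the paper's setting the linear Q-function would be replaced by a deep network with a separate target network, and the environment by PyDial's simulated users; the replay-and-TD structure, however, is the common core across the parametric baselines.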