
What Does The User Want? Information Gain for Hierarchical Dialogue Policy Optimisation (2109.07129v1)

Published 15 Sep 2021 in cs.LG and cs.CL

Abstract: The dialogue management component of a task-oriented dialogue system is typically optimised via reinforcement learning (RL). Optimisation via RL is highly susceptible to sample inefficiency and instability. The hierarchical approach called Feudal Dialogue Management takes a step towards more efficient learning by decomposing the action space. However, it still suffers from instability because the reward is provided only at the end of the dialogue. We propose the use of an intrinsic reward based on information gain to address this issue. Our proposed reward favours actions that resolve uncertainty or query the user whenever necessary. It enables the policy to learn how to retrieve the user's needs efficiently, which is an integral aspect of every task-oriented conversation. Our algorithm, which we call FeudalGain, achieves state-of-the-art results in most environments of the PyDial framework, outperforming much more complex approaches. We confirm the sample efficiency and stability of our algorithm through experiments in simulation and a human trial.
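The core idea of the intrinsic reward can be illustrated as the reduction in uncertainty (entropy) of the system's belief over the user's goal before and after a system action. The sketch below is an illustrative assumption about the general form of such a reward, not the paper's exact formulation; the function names and example belief distributions are hypothetical.

```python
import math

def entropy(dist):
    """Shannon entropy of a discrete belief distribution."""
    return -sum(p * math.log(p) for p in dist if p > 0)

def information_gain_reward(belief_before, belief_after):
    """Intrinsic reward sketch: how much a system action reduced
    uncertainty about the user's goal (positive = uncertainty resolved)."""
    return entropy(belief_before) - entropy(belief_after)

# Example: a clarifying question sharpens the belief over a slot's values.
before = [0.25, 0.25, 0.25, 0.25]   # maximal uncertainty over 4 values
after = [0.85, 0.05, 0.05, 0.05]    # user's answer resolves most uncertainty
reward = information_gain_reward(before, after)
```

An action that queries the user when the belief is uncertain yields a positive intrinsic reward, while redundant questions (which leave the belief unchanged) yield none, giving the policy a dense learning signal before the final dialogue-level reward arrives.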

Authors (8)
  1. Christian Geishauser (19 papers)
  2. Songbo Hu (9 papers)
  3. Hsien-chin Lin (22 papers)
  4. Nurul Lubis (21 papers)
  5. Michael Heck (23 papers)
  6. Shutong Feng (19 papers)
  7. Carel van Niekerk (23 papers)
  8. Milica Gašić (57 papers)
Citations (3)