Feudal Reinforcement Learning for Dialogue Management in Large Domains (1803.03232v1)

Published 8 Mar 2018 in cs.CL, cs.AI, and cs.NE

Abstract: Reinforcement learning (RL) is a promising approach to solve dialogue policy optimisation. Traditional RL algorithms, however, fail to scale to large domains due to the curse of dimensionality. We propose a novel Dialogue Management architecture, based on Feudal RL, which decomposes the decision into two steps: a first step where a master policy selects a subset of primitive actions, and a second step where a primitive action is chosen from the selected subset. The structural information included in the domain ontology is used to abstract the dialogue state space, taking the decisions at each step using different parts of the abstracted state. This, combined with an information sharing mechanism between slots, increases the scalability to large domains. We show that an implementation of this approach, based on Deep-Q Networks, significantly outperforms the previous state of the art in several dialogue domains and environments, without the need for any additional reward signal.
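
The sketch below illustrates the two-step feudal decision the abstract describes: a master policy first picks a subset of primitive actions (slot-independent vs. slot-dependent), and a second-level policy then picks a primitive action within that subset, with one set of weights shared across slots. This is only a minimal illustration of the control flow; the feature extractors, action sets, and the linear scorers standing in for the paper's Deep-Q Networks are assumed placeholders, not the authors' implementation.

```python
# Minimal sketch of the two-step feudal decision (assumed placeholder code).
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical abstracted feature extractors over the belief state.
def phi_master(belief):            # features seen by the master policy
    return np.asarray(belief["summary"], dtype=float)

def phi_slot_independent(belief):  # slot-independent part of the abstracted state
    return np.asarray(belief["general"], dtype=float)

def phi_slot(belief, slot):        # slot-abstracted features for one slot
    return np.asarray(belief["slots"][slot], dtype=float)

# Random linear scorers stand in for the Deep-Q Networks.
W_master   = rng.normal(size=(2, 4))  # 2 subsets: slot-independent / slot-dependent
W_slot_ind = rng.normal(size=(3, 4))  # e.g. {inform, repeat, bye}
W_slot_dep = rng.normal(size=(2, 4))  # e.g. {request, confirm}; weights shared across slots

SLOT_IND_ACTIONS = ["inform", "repeat", "bye"]
SLOT_DEP_ACTIONS = ["request", "confirm"]

def feudal_act(belief):
    # Step 1: master policy selects a subset of primitive actions.
    q_master = W_master @ phi_master(belief)
    if q_master.argmax() == 0:
        # Step 2a: choose among slot-independent primitive actions.
        q = W_slot_ind @ phi_slot_independent(belief)
        return SLOT_IND_ACTIONS[int(q.argmax())]
    # Step 2b: the shared slot-dependent policy scores each slot's actions;
    # return the highest-scoring (action, slot) pair.
    best = None
    for slot in belief["slots"]:
        q = W_slot_dep @ phi_slot(belief, slot)
        a = int(q.argmax())
        if best is None or q[a] > best[0]:
            best = (q[a], f"{SLOT_DEP_ACTIONS[a]}({slot})")
    return best[1]

# Toy belief state with three slots, just to exercise the decision path.
belief = {
    "summary": rng.random(4),
    "general": rng.random(4),
    "slots": {"area": rng.random(4), "price": rng.random(4), "food": rng.random(4)},
}
print(feudal_act(belief))
```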

Authors (7)
  1. Iñigo Casanueva (18 papers)
  2. Paweł Budzianowski (27 papers)
  3. Pei-Hao Su (25 papers)
  4. Stefan Ultes (32 papers)
  5. Lina Rojas-Barahona (11 papers)
  6. Bo-Hsiang Tseng (20 papers)
  7. Milica Gašić (57 papers)
Citations (48)