Dynamic Dialogue Policy for Continual Reinforcement Learning (2204.05928v2)

Published 12 Apr 2022 in cs.CL and cs.LG

Abstract: Continual learning is one of the key components of human learning and a necessary requirement of artificial intelligence. As dialogue can potentially span infinitely many topics and tasks, a task-oriented dialogue system must have the capability to continually learn, dynamically adapting to new challenges while preserving the knowledge it has already acquired. Despite its importance, continual reinforcement learning of the dialogue policy has remained largely unaddressed. The lack of a framework with training protocols, baseline models, and suitable metrics has so far hindered research in this direction. In this work we fill precisely this gap, enabling research in dialogue policy optimisation to go from static to dynamic learning. We provide a continual learning algorithm, baseline architectures, and metrics for assessing continual learning models. Moreover, we propose the dynamic dialogue policy transformer (DDPT), a novel dynamic architecture that integrates new knowledge seamlessly, handles large state spaces, and obtains significant zero-shot performance on unseen domains, without any growth in network parameter size.
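The parameter-stability claim is the architectural crux: rather than mapping a fixed-size state vector to a fixed action head, DDPT represents state features and actions through embeddings of their natural-language descriptions, so a new domain contributes new descriptions rather than new weights. The sketch below is a minimal, hypothetical illustration of that idea, not the authors' implementation: the `describe` helper stands in for the pretrained description encoder the paper relies on, and all names, descriptions, and dimensions are illustrative assumptions.

```python
# Minimal sketch (not the authors' code) of a description-based dialogue policy:
# state features and actions are scored via embeddings of their textual
# descriptions, so extending to a new domain adds descriptions, not parameters.

import torch
import torch.nn as nn


def describe(texts, dim=64):
    """Stand-in for a frozen pretrained encoder: map each description to a vector.

    Uses a hash-seeded random projection purely for illustration; a real system
    would embed the descriptions with a pretrained language model.
    """
    gens = [torch.Generator().manual_seed(hash(t) % (2**31)) for t in texts]
    return torch.stack([torch.randn(dim, generator=g) for g in gens])


class DescriptionPolicy(nn.Module):
    def __init__(self, dim=64, heads=4, layers=2):
        super().__init__()
        enc_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                               batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=layers)
        self.query = nn.Parameter(torch.randn(1, dim))  # learned pooling token

    def forward(self, feat_emb, feat_vals, act_emb):
        # feat_emb:  (F, dim) description embeddings of state features
        # feat_vals: (F,)     current values of those features
        # act_emb:   (A, dim) description embeddings of candidate actions
        tokens = feat_emb * feat_vals.unsqueeze(-1)      # value-scaled features
        seq = torch.cat([self.query, tokens], dim=0)     # prepend pooling token
        state = self.encoder(seq.unsqueeze(0))[0, 0]     # pooled state vector
        return act_emb @ state                           # one logit per action


# Usage: a new domain only contributes new description embeddings.
feats = describe(["hotel area", "hotel price range", "user goal: book hotel"])
acts = describe(["request hotel area", "inform hotel price", "book hotel"])
policy = DescriptionPolicy()
logits = policy(feats, torch.tensor([1.0, 0.0, 1.0]), acts)
print(logits.shape)  # torch.Size([3])
```

Because action logits are dot products against description embeddings, adding a domain only extends the rows of `act_emb` and `feat_emb`; the trainable parameter count of the policy itself stays fixed, which is what enables zero-shot transfer to unseen domains in this framing.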

Authors (7)
  1. Christian Geishauser (19 papers)
  2. Carel van Niekerk (23 papers)
  3. Hsien-chin Lin (22 papers)
  4. Nurul Lubis (21 papers)
  5. Michael Heck (23 papers)
  6. Shutong Feng (19 papers)
  7. Milica Gašić (57 papers)
Citations (13)
