Domain Transfer in Dialogue Systems without Turn-Level Supervision (1909.07101v1)

Published 16 Sep 2019 in cs.CL

Abstract: Task-oriented dialogue systems rely heavily on specialized dialogue state tracking (DST) modules for dynamically predicting user intent throughout the conversation. State-of-the-art DST models are typically trained in a supervised manner from manual annotations at the turn level. However, these annotations are costly to obtain, which makes it difficult to create accurate dialogue systems for new domains. To address these limitations, we propose a method, based on reinforcement learning, for transferring DST models to new domains without turn-level supervision. Across several domains, our experiments show that this method quickly adapts off-the-shelf models to new domains and performs on par with models trained with turn-level supervision. We also show our method can improve models trained using turn-level supervision by subsequent fine-tuning optimization toward dialogue-level rewards.
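As a rough illustration of the dialogue-level reward idea, the sketch below fine-tunes a toy DST model with REINFORCE, where a single reward is assigned to the whole dialogue rather than to each turn. Everything here (`DummyDST`, `dialogue_reward`, the turn features) is a hypothetical placeholder standing in for the paper's actual model and reward signal, not the authors' implementation.

```python
import torch
import torch.nn as nn

class DummyDST(nn.Module):
    """Toy stand-in for a DST model: encodes turn features and emits a
    categorical distribution over candidate slot values at each turn."""
    def __init__(self, feat_dim=16, hidden=32, num_values=10):
        super().__init__()
        self.encoder = nn.GRU(input_size=feat_dim, hidden_size=hidden,
                              batch_first=True)
        self.head = nn.Linear(hidden, num_values)

    def forward(self, turn_feats):
        # turn_feats: (1, num_turns, feat_dim) pre-extracted turn features
        out, _ = self.encoder(turn_feats)
        return torch.distributions.Categorical(logits=self.head(out))

def dialogue_reward(predictions):
    # Placeholder for a dialogue-level signal (e.g. task success at the
    # end of the conversation); the method optimizes such rewards instead
    # of requiring per-turn state annotations.
    return torch.randn(())

model = DummyDST()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

for step in range(100):
    turn_feats = torch.randn(1, 5, 16)     # one simulated 5-turn dialogue
    dist = model(turn_feats)
    actions = dist.sample()                # sampled slot-value predictions
    reward = dialogue_reward(actions)      # one reward for the whole dialogue
    # REINFORCE: scale the summed turn log-probabilities by the reward
    loss = -(dist.log_prob(actions).sum() * reward)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In practice the reward would come from a dialogue success metric rather than noise, but the structure is the same: no turn-level labels appear anywhere in the update.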

Authors (6)
  1. Joachim Bingel (4 papers)
  2. Victor Petrén Bach Hansen (4 papers)
  3. Ana Valeria Gonzalez (7 papers)
  4. Paweł Budzianowski (27 papers)
  5. Isabelle Augenstein (131 papers)
  6. Anders Søgaard (120 papers)
Citations (2)