Learning to Learn End-to-End Goal-Oriented Dialog From Related Dialog Tasks (2110.15724v1)

Published 10 Oct 2021 in cs.CL and cs.LG

Abstract: For each goal-oriented dialog task of interest, large amounts of data need to be collected for end-to-end learning of a neural dialog system. Collecting that data is a costly and time-consuming process. Instead, we show that we can use only a small amount of data, supplemented with data from a related dialog task. Naively learning from related data fails to improve performance as the related data can be inconsistent with the target task. We describe a meta-learning based method that selectively learns from the related dialog task data. Our approach leads to significant accuracy improvements in an example dialog task.
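The abstract's core idea, selectively weighting related-task examples so that only those consistent with the target task contribute to learning, can be illustrated with a toy meta-learning loop. The sketch below is not the paper's method; it is a minimal, hypothetical example on a scalar linear model: per-example weights on related data are updated by differentiating the target-task validation loss through one inner gradient step, so inconsistent related examples are automatically down-weighted. All data, learning rates, and the model are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny target task: y = 2x (only a small amount of target data, as in the paper's setting)
x_t = rng.uniform(-1, 1, 8);  y_t = 2.0 * x_t
x_v = rng.uniform(-1, 1, 8);  y_v = 2.0 * x_v          # target-task validation split

# "Related task" data: half consistent with the target (y = 2x), half inconsistent (y = -2x)
x_r = rng.uniform(-1, 1, 20)
consistent = np.arange(20) < 10
y_r = np.where(consistent, 2.0 * x_r, -2.0 * x_r)

w = 0.0                       # scalar linear model: y_hat = w * x
a = np.full(20, 0.5)          # learnable per-example weights on the related data
eta, meta_lr = 0.1, 0.5

for _ in range(200):
    # Inner step: gradient descent on target loss + weighted related-task loss
    g_w = 2 * np.mean(x_t * (w * x_t - y_t)) \
        + (2 / len(x_r)) * np.sum(a * x_r * (w * x_r - y_r))
    w_new = w - eta * g_w

    # Meta step: gradient of target validation loss w.r.t. the weights a,
    # back through the inner update (chain rule, computed analytically here)
    dval_dw = 2 * np.mean(x_v * (w_new * x_v - y_v))
    dw_da = -eta * (2 / len(x_r)) * x_r * (w * x_r - y_r)
    a = np.clip(a - meta_lr * dval_dw * dw_da, 0.0, 1.0)
    w = w_new
```

After training, the weights on consistent related examples end up well above those on inconsistent ones, and the model converges toward the target task's solution despite the conflicting related data, which is the qualitative behavior the abstract describes.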

Authors (3)
  1. Janarthanan Rajendran (26 papers)
  2. Jonathan K. Kummerfeld (38 papers)
  3. Satinder Singh (80 papers)