Schema Augmentation for Zero-Shot Domain Adaptation in Dialogue State Tracking (2411.00150v1)

Published 31 Oct 2024 in cs.CL and cs.AI

Abstract: Zero-shot domain adaptation for dialogue state tracking (DST) remains a challenging problem in task-oriented dialogue (TOD) systems, where models must generalize to target domains unseen at training time. Current LLM approaches for zero-shot domain adaptation rely on prompting to introduce knowledge pertaining to the target domains. However, their efficacy strongly depends on prompt engineering, as well as the zero-shot ability of the underlying LLM. In this work, we devise a novel data augmentation approach, Schema Augmentation, that improves the zero-shot domain adaptation of LLMs through fine-tuning. Schema Augmentation is a simple but effective technique that enhances generalization by introducing variations of slot names within the schema provided in the prompt. Experiments on MultiWOZ and SpokenWOZ showed that the proposed approach resulted in a substantial improvement over the baseline, in some experiments achieving over a twofold accuracy gain over unseen domains while maintaining equal or superior performance over all domains.
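The core idea described in the abstract, perturbing slot names in the prompt schema during fine-tuning so the model does not overfit to exact slot surface forms, can be sketched as below. This is a minimal illustration only: the slot names, the variant table, and the replacement probability are hypothetical assumptions for demonstration, not the paper's released implementation.

```python
import random

# Hypothetical variant table for illustration; the paper only states that
# "variations of slot names" are introduced in the schema, so the concrete
# variation strategy shown here (synonym-style substitution) is an assumption.
SLOT_NAME_VARIANTS = {
    "hotel-pricerange": ["hotel-price range", "hotel-cost category"],
    "hotel-area": ["hotel-location", "hotel-district"],
    "restaurant-food": ["restaurant-cuisine", "restaurant-food type"],
}

def augment_schema(schema_slots, variant_prob=0.5, rng=None):
    """Return a copy of the schema with some slot names swapped for variants.

    schema_slots: list of slot-name strings as they appear in the prompt schema.
    variant_prob: chance of replacing a slot name when a variant is available.
    """
    rng = rng or random.Random()
    augmented = []
    for slot in schema_slots:
        variants = SLOT_NAME_VARIANTS.get(slot)
        if variants and rng.random() < variant_prob:
            augmented.append(rng.choice(variants))
        else:
            augmented.append(slot)
    return augmented

# Example: build several augmented schema views of one training dialogue,
# so fine-tuning sees the same dialogue under different slot-name surface forms.
schema = ["hotel-pricerange", "hotel-area", "restaurant-food"]
for seed in range(3):
    print(augment_schema(schema, rng=random.Random(seed)))
```

Under this reading, each training example can be paired with several augmented schema renderings, which plausibly encourages the model to key on slot semantics rather than memorized slot strings; the exact sampling and pairing procedure used in the paper may differ.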

Authors (6)
  1. Christopher Richardson (8 papers)
  2. Roshan Sharma (24 papers)
  3. Neeraj Gaur (7 papers)
  4. Parisa Haghani (15 papers)
  5. Anirudh Sundar (8 papers)
  6. Bhuvana Ramabhadran (47 papers)