Schema Augmentation for Zero-Shot Domain Adaptation in Dialogue State Tracking (2411.00150v1)
Abstract: Zero-shot domain adaptation for dialogue state tracking (DST) remains a challenging problem in task-oriented dialogue (TOD) systems, where models must generalize to target domains unseen at training time. Current LLM approaches to zero-shot domain adaptation rely on prompting to introduce knowledge pertaining to the target domains. However, their efficacy strongly depends on prompt engineering, as well as the zero-shot ability of the underlying LLM. In this work, we devise a novel data augmentation approach, Schema Augmentation, that improves the zero-shot domain adaptation of LLMs through fine-tuning. Schema Augmentation is a simple but effective technique that enhances generalization by introducing variations of slot names within the schema provided in the prompt. Experiments on MultiWOZ and SpokenWOZ show that the proposed approach yields a substantial improvement over the baseline, in some experiments achieving more than a twofold accuracy gain on unseen domains while maintaining equal or superior performance across all domains.
- Christopher Richardson
- Roshan Sharma
- Neeraj Gaur
- Parisa Haghani
- Anirudh Sundar
- Bhuvana Ramabhadran
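To make the core idea concrete, below is a minimal sketch of slot-name augmentation as described in the abstract. The schema slots and the synonym map are invented for illustration; the abstract does not specify which variation strategies (e.g., synonym substitution vs. other renamings) the paper actually uses.

```python
import random

# Hypothetical MultiWOZ-style slot names and synonym variants.
# These mappings are illustrative assumptions, not the paper's exact data.
SYNONYMS = {
    "hotel-pricerange": ["hotel-price", "hotel-cost-level"],
    "hotel-area": ["hotel-location", "hotel-district"],
    "train-departure": ["train-origin", "train-from"],
}

def augment_schema(slot_names, synonyms, rng=None):
    """Return a copy of the schema with each slot name possibly
    swapped for one of its variants, so the fine-tuned model learns
    not to overfit to exact slot-name strings in the prompt."""
    rng = rng or random.Random(0)
    augmented = []
    for slot in slot_names:
        options = [slot] + synonyms.get(slot, [])
        augmented.append(rng.choice(options))
    return augmented

schema = ["hotel-pricerange", "hotel-area", "train-departure"]
print(augment_schema(schema, SYNONYMS))
```

At fine-tuning time, each training prompt would embed one such augmented schema, encouraging the model to generalize to unseen domains whose slot names differ from those seen during training.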