
Zero-Shot Dialogue State Tracking via Cross-Task Transfer (2109.04655v1)

Published 10 Sep 2021 in cs.CL

Abstract: Zero-shot transfer learning for dialogue state tracking (DST) enables us to handle a variety of task-oriented dialogue domains without the expense of collecting in-domain data. In this work, we propose to transfer the \textit{cross-task} knowledge from general question answering (QA) corpora for the zero-shot DST task. Specifically, we propose TransferQA, a transferable generative QA model that seamlessly combines extractive QA and multi-choice QA via a text-to-text transformer framework, and tracks both categorical slots and non-categorical slots in DST. In addition, we introduce two effective ways to construct unanswerable questions, namely, negative question sampling and context truncation, which enable our model to handle "none" value slots in the zero-shot DST setting. The extensive experiments show that our approaches substantially improve the existing zero-shot and few-shot results on MultiWoz. Moreover, compared to the fully trained baseline on the Schema-Guided Dialogue dataset, our approach shows better generalization ability in unseen domains.

Authors (11)
  1. Zhaojiang Lin (45 papers)
  2. Bing Liu (211 papers)
  3. Andrea Madotto (64 papers)
  4. Seungwhan Moon (28 papers)
  5. Paul Crook (10 papers)
  6. Zhenpeng Zhou (7 papers)
  7. Zhiguang Wang (24 papers)
  8. Zhou Yu (206 papers)
  9. Eunjoon Cho (6 papers)
  10. Rajen Subba (8 papers)
  11. Pascale Fung (150 papers)
Citations (72)