Training Neural Response Selection for Task-Oriented Dialogue Systems (1906.01543v2)
Abstract: Despite their popularity in the chatbot literature, retrieval-based models have had modest impact on task-oriented dialogue systems, with the main obstacle to their application being the low-data regime of most task-oriented dialogue tasks. Inspired by the recent success of pretraining in language modelling, we propose an effective method for deploying response selection in task-oriented dialogue. To train response selection models for task-oriented dialogue tasks, we propose a novel method which: 1) pretrains the response selection model on large general-domain conversational corpora; and then 2) fine-tunes the pretrained model for the target dialogue domain, relying only on the small in-domain dataset to capture the nuances of the given dialogue domain. Our evaluation on six diverse application domains, ranging from e-commerce to banking, demonstrates the effectiveness of the proposed training method.
- Matthew Henderson
- Ivan Vulić
- Daniela Gerz
- Iñigo Casanueva
- Paweł Budzianowski
- Sam Coope
- Georgios Spithourakis
- Tsung-Hsien Wen
- Nikola Mrkšić
- Pei-Hao Su
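The two-stage procedure from the abstract can be sketched as follows. This is a minimal toy illustration, not the authors' model: it assumes a bilinear scoring function `score(c, r) = cᵀ W r` over bag-of-words features, trained with an in-batch softmax loss (each context's true response is the positive; the other responses in the batch serve as negatives). The corpora, vocabulary handling, and hyperparameters are all invented for illustration; the paper's actual encoder and training setup differ.

```python
import numpy as np

def featurize(texts, vocab):
    # Binary bag-of-words features, L2-normalized.
    X = np.zeros((len(texts), len(vocab)))
    for i, t in enumerate(texts):
        for w in t.split():
            if w in vocab:
                X[i, vocab[w]] = 1.0
    return X / np.maximum(np.linalg.norm(X, axis=1, keepdims=True), 1e-8)

def train(W, C, R, lr, epochs):
    # In-batch softmax cross-entropy: maximize score(c_i, r_i) against
    # the other responses in the batch, which act as random negatives.
    n = len(C)
    for _ in range(epochs):
        S = C @ W @ R.T                            # pairwise scores
        P = np.exp(S - S.max(axis=1, keepdims=True))
        P /= P.sum(axis=1, keepdims=True)          # row-wise softmax
        W -= lr * C.T @ (P - np.eye(n)) @ R / n    # gradient step
    return W

# Toy stand-ins for a large general-domain corpus and a small in-domain set.
general = [
    ("hello how are you", "i am fine thanks"),
    ("what time is it", "it is noon right now"),
    ("thank you so much", "you are welcome"),
    ("goodbye for now", "see you later"),
]
in_domain = [  # hypothetical banking domain
    ("check my account balance", "your balance is one hundred dollars"),
    ("transfer funds to savings", "the transfer has been completed"),
    ("i lost my card", "we will block the card immediately"),
]

all_text = " ".join(c + " " + r for c, r in general + in_domain)
vocab = {w: i for i, w in enumerate(sorted(set(all_text.split())))}
W = np.zeros((len(vocab), len(vocab)))

# Stage 1: pretrain on general-domain conversational pairs.
Cg = featurize([c for c, _ in general], vocab)
Rg = featurize([r for _, r in general], vocab)
W = train(W, Cg, Rg, lr=1.0, epochs=300)

# Stage 2: fine-tune on the small in-domain dataset (lower learning rate).
Ci = featurize([c for c, _ in in_domain], vocab)
Ri = featurize([r for _, r in in_domain], vocab)
W = train(W, Ci, Ri, lr=0.3, epochs=300)

def select(context, responses):
    # Response selection: rank candidate responses, return the best index.
    c = featurize([context], vocab)
    return int(np.argmax(c @ W @ featurize(responses, vocab).T))

candidates = [r for _, r in in_domain]
print(select("check my account balance", candidates))
```

The point of the sketch is the workflow, not the model: the same parameters are first shaped on plentiful general-domain pairs and then adapted to the target domain using only the small in-domain set.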