
Training Neural Response Selection for Task-Oriented Dialogue Systems (1906.01543v2)

Published 4 Jun 2019 in cs.CL

Abstract: Despite their popularity in the chatbot literature, retrieval-based models have had modest impact on task-oriented dialogue systems, with the main obstacle to their application being the low-data regime of most task-oriented dialogue tasks. Inspired by the recent success of pretraining in language modelling, we propose an effective method for deploying response selection in task-oriented dialogue. To train response selection models for task-oriented dialogue tasks, we propose a novel method which: 1) pretrains the response selection model on large general-domain conversational corpora; and then 2) fine-tunes the pretrained model for the target dialogue domain, relying only on the small in-domain dataset to capture the nuances of the given dialogue domain. Our evaluation on six diverse application domains, ranging from e-commerce to banking, demonstrates the effectiveness of the proposed training method.
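At inference time, a response selection model of this kind ranks a set of candidate responses by how well each matches the dialogue context; the pretrain-then-fine-tune recipe changes only the training data, not this scoring step. The sketch below illustrates the selection step with a toy bag-of-words encoder and cosine similarity standing in for the paper's neural encoders (the `embed`, `score`, and `select_response` names are illustrative, not from the paper):

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "encoder"; a stand-in for a trained neural encoder
    # that would map text to a dense vector.
    return Counter(text.lower().split())

def score(ctx_vec, resp_vec):
    # Cosine similarity between the context and response encodings,
    # mirroring the dot-product scoring used by dual-encoder
    # response selection models.
    dot = sum(ctx_vec[w] * resp_vec[w] for w in ctx_vec)
    norm = (math.sqrt(sum(v * v for v in ctx_vec.values()))
            * math.sqrt(sum(v * v for v in resp_vec.values())))
    return dot / norm if norm else 0.0

def select_response(context, candidates):
    # Rank all candidates against the context and return the best match.
    ctx = embed(context)
    return max(candidates, key=lambda r: score(ctx, embed(r)))

candidates = [
    "Your card has been blocked.",
    "The order ships tomorrow.",
    "I like pizza.",
]
print(select_response("when will my order ship", candidates))
```

In the paper's setup the encoders would first be pretrained on large general-domain conversational corpora and then fine-tuned on the small in-domain dataset; only the encoder weights change between the two stages, while the ranking procedure stays the same.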

Authors (10)
  1. Matthew Henderson (13 papers)
  2. Ivan Vulić (130 papers)
  3. Daniela Gerz (11 papers)
  4. Iñigo Casanueva (18 papers)
  5. Paweł Budzianowski (27 papers)
  6. Sam Coope (6 papers)
  7. Georgios Spithourakis (3 papers)
  8. Tsung-Hsien Wen (27 papers)
  9. Nikola Mrkšić (30 papers)
  10. Pei-Hao Su (25 papers)
Citations (107)