Training Zero-Shot Generalizable End-to-End Task-Oriented Dialog System Without Turn-level Dialog Annotations (2407.15055v2)

Published 21 Jul 2024 in cs.CL

Abstract: Task-oriented dialogue (TOD) systems enable users to achieve their goals through natural language interactions. Traditionally, these systems have relied on turn-level manually annotated metadata, such as dialogue states and policy annotations, which are expensive, time-consuming, and often inconsistent or error-prone. This dependence limits the potential to leverage vast amounts of readily available conversational data for training TOD systems. Additionally, a critical challenge in TOD system design is determining when and how to access and integrate information from external sources. Current approaches typically expect this information to be provided alongside the dialogue context, rather than learning to identify and retrieve it autonomously. While pre-trained LLMs have been used to develop TOD systems, their potential to train such systems without laborious annotations remains largely unexplored. This work employs multi-task instruction fine-tuning to create more efficient and scalable TOD systems that can effectively leverage natural language conversational data without manual annotations, while autonomously managing external information retrieval. Our extensive experimental evaluations, using three diverse TOD datasets and three LLMs of varying sizes, demonstrate that our approach can generalize to new, unseen domains. Notably, our approach outperforms both state-of-the-art models trained on annotated data and billion-scale parameter off-the-shelf ChatGPT models.
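As a rough illustration of the annotation-free setup the abstract describes, the sketch below converts raw dialogue turns into multi-task instruction-tuning examples: one task teaches the model to decide when external information must be retrieved, the other to generate the next system response directly from the conversation. The prompt wording, the `Turn` structure, and the keyword heuristic used to label the retrieval task are assumptions for illustration only, not the authors' released pipeline.

```python
# Illustrative sketch (not the paper's code): building multi-task
# instruction-tuning examples from raw, unannotated task-oriented dialogues.
from dataclasses import dataclass

@dataclass
class Turn:
    user: str
    system: str

def make_examples(history: list[Turn], current_user: str, target_system: str):
    """Turn one dialogue step into two instruction-tuning examples:
    (1) decide whether external information is needed, and
    (2) generate the next system response."""
    context = "\n".join(f"User: {t.user}\nSystem: {t.system}" for t in history)
    context = f"{context}\nUser: {current_user}".strip()

    # Task 1: retrieval decision (learning when to query a KB/API).
    retrieval_example = {
        "instruction": "Given the dialogue, decide if external information "
                       "(e.g., a database lookup) is needed before replying. "
                       "Answer 'yes' or 'no'.",
        "input": context,
        # Weak keyword heuristic, used here only to make the sketch runnable;
        # the paper's approach learns this behavior end-to-end.
        "output": "yes" if any(w in current_user.lower()
                               for w in ("find", "book", "price")) else "no",
    }

    # Task 2: response generation conditioned on the raw dialogue alone,
    # with no dialogue-state or policy annotations.
    response_example = {
        "instruction": "Generate the next system response for this "
                       "task-oriented dialogue.",
        "input": context,
        "output": target_system,
    }
    return [retrieval_example, response_example]

if __name__ == "__main__":
    history = [Turn("I need a hotel in the centre.", "Sure, any price range?")]
    examples = make_examples(history, "Find me something cheap.",
                             "Alexander B&B is a cheap hotel in the centre.")
    for ex in examples:
        print(ex["instruction"], "->", ex["output"])
```

Examples in this format can then be fed to any standard instruction fine-tuning loop; the multi-task mix is what lets a single model both manage retrieval and respond without turn-level annotations.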

Authors (2)
  1. Adib Mosharrof (7 papers)
  2. A. B. Siddique (20 papers)