
Contextual Dynamic Prompting for Response Generation in Task-oriented Dialog Systems (2301.13268v2)

Published 30 Jan 2023 in cs.CL

Abstract: Response generation is one of the critical components in task-oriented dialog systems. Existing studies have shown that large pre-trained language models can be adapted to this task. The typical paradigm for adapting such extremely large language models is fine-tuning on the downstream task, which is not only time-consuming but also requires significant resources and access to fine-tuning data. Prompting (Schick and Schütze, 2020) has been an alternative to fine-tuning in many NLP tasks. In our work, we explore the idea of using prompting for response generation in task-oriented dialog systems. Specifically, we propose an approach that performs contextual dynamic prompting, where the prompts are learnt from dialog contexts. We aim to distill useful prompting signals from the dialog context. In experiments on the MultiWOZ 2.2 dataset (Zang et al., 2020), we show that contextual dynamic prompts improve response generation in terms of combined score (Mehri et al., 2019) by 3 absolute points, and by a massive 20 points when dialog states are incorporated. Furthermore, human annotation on these conversations found that agents which incorporate context were preferred over agents with vanilla prefix-tuning.
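The core idea described in the abstract — learning prompt (prefix) embeddings dynamically from the dialog context rather than using a fixed, context-independent prefix — can be sketched in a minimal, framework-free form. Everything below is a hypothetical illustration, not the paper's implementation: the context encoder, the projection matrix `W`, and all dimensions are stand-ins for the real LM components.

```python
import numpy as np

rng = np.random.default_rng(0)

EMBED_DIM = 8   # hypothetical LM embedding size
PREFIX_LEN = 3  # number of learned prefix positions

# Hypothetical frozen context encoder: here just the mean of the
# dialog-context token embeddings (a real system would use the LM itself).
def encode_context(token_embeddings: np.ndarray) -> np.ndarray:
    return token_embeddings.mean(axis=0)

# Trainable projection mapping the context vector to a sequence of
# prefix embeddings -- the "contextual dynamic prompt". In vanilla
# prefix-tuning this prefix would instead be a fixed learned parameter.
W = rng.normal(scale=0.1, size=(EMBED_DIM, PREFIX_LEN * EMBED_DIM))

def dynamic_prefix(context_vec: np.ndarray) -> np.ndarray:
    return (context_vec @ W).reshape(PREFIX_LEN, EMBED_DIM)

# Stand-ins for embedded dialog history and the current user utterance.
dialog_tokens = rng.normal(size=(5, EMBED_DIM))
utterance_tokens = rng.normal(size=(4, EMBED_DIM))

prefix = dynamic_prefix(encode_context(dialog_tokens))
lm_input = np.concatenate([prefix, utterance_tokens], axis=0)
print(lm_input.shape)  # (PREFIX_LEN + 4, EMBED_DIM) -> (7, 8)
```

Because the prefix is a function of the dialog context, each turn conditions the frozen LM on a different prompt; incorporating the dialog state would simply mean feeding it into `encode_context` as additional input.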

Authors (4)
  1. Sandesh Swamy (4 papers)
  2. Narges Tabari (5 papers)
  3. Chacha Chen (17 papers)
  4. Rashmi Gangadharaiah (12 papers)
Citations (5)