
Domain-independent User Simulation with Transformers for Task-oriented Dialogue Systems (2106.08838v1)

Published 16 Jun 2021 in cs.CL

Abstract: Dialogue policy optimisation via reinforcement learning requires a large number of training interactions, which makes learning with real users time-consuming and expensive. Many set-ups therefore rely on a user simulator instead of humans. These user simulators have their own problems. While hand-coded, rule-based user simulators have been shown to be sufficient in small, simple domains, for complex domains the number of rules quickly becomes intractable. State-of-the-art data-driven user simulators, on the other hand, are still domain-dependent. This means that adaptation to each new domain requires redesigning and retraining. In this work, we propose a domain-independent transformer-based user simulator (TUS). The structure of our TUS is not tied to a specific domain, enabling domain generalisation and learning of cross-domain user behaviour from data. We compare TUS with the state of the art using automatic as well as human evaluations. TUS can compete with rule-based user simulators on pre-defined domains and is able to generalise to unseen domains in a zero-shot fashion.
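
The architectural idea behind the domain independence claim can be illustrated with a minimal sketch: instead of a fixed, domain-specific state vector, the simulator consumes one feature row per (domain, slot) pair and lets a transformer encoder attend across however many slots the current ontology has. The code below is not the authors' implementation; the feature dimension, number of action classes, and per-slot classification head are illustrative assumptions.

```python
# Hypothetical sketch of a TUS-style domain-independent user simulator.
# Assumed (not taken from the paper): feat_dim, n_actions, and the
# per-slot action head are placeholder choices for illustration.
import torch
import torch.nn as nn

class TransformerUserSimulator(nn.Module):
    """Encodes one feature vector per (domain, slot) pair, so the
    network's shape does not depend on any particular ontology."""

    def __init__(self, feat_dim=40, d_model=128, n_heads=4,
                 n_layers=3, n_actions=6):
        super().__init__()
        self.proj = nn.Linear(feat_dim, d_model)   # slot features -> model dim
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.action_head = nn.Linear(d_model, n_actions)  # per-slot user action

    def forward(self, slot_feats, pad_mask=None):
        # slot_feats: (batch, num_slots, feat_dim); num_slots varies by domain
        h = self.encoder(self.proj(slot_feats), src_key_padding_mask=pad_mask)
        return self.action_head(h)                 # (batch, num_slots, n_actions)

# Usage: a new domain is just a different number of slot rows.
sim = TransformerUserSimulator()
feats = torch.randn(2, 12, 40)                     # 2 dialogues, 12 slots each
logits = sim(feats)
print(logits.shape)                                # torch.Size([2, 12, 6])
```

Because the encoder operates on a variable-length sequence of slot rows rather than a fixed state layout, the same weights can, in principle, be applied to an unseen ontology, which is the property the abstract points to for zero-shot generalisation.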

Authors (8)
  1. Hsien-chin Lin (22 papers)
  2. Nurul Lubis (21 papers)
  3. Songbo Hu (9 papers)
  4. Carel van Niekerk (23 papers)
  5. Christian Geishauser (19 papers)
  6. Michael Heck (23 papers)
  7. Shutong Feng (19 papers)
  8. Milica Gašić (57 papers)
Citations (29)
