Learning Knowledge Bases with Parameters for Task-Oriented Dialogue Systems (2009.13656v1)

Published 28 Sep 2020 in cs.CL and cs.AI

Abstract: Task-oriented dialogue systems are either modularized with separate dialogue state tracking (DST) and management steps or end-to-end trainable. In either case, the knowledge base (KB) plays an essential role in fulfilling user requests. Modularized systems rely on DST to interact with the KB, which is expensive in terms of annotation and inference time. End-to-end systems use the KB directly as input, but they cannot scale when the KB is larger than a few hundred entries. In this paper, we propose a method to embed the KB, of any size, directly into the model parameters. The resulting model does not require any DST or template responses, nor the KB as input, and it can dynamically update its KB via fine-tuning. We evaluate our solution in five task-oriented dialogue datasets with small, medium, and large KB size. Our experiments show that end-to-end models can effectively embed knowledge bases in their parameters and achieve competitive performance in all evaluated datasets.
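The core idea in the abstract — storing the KB in model parameters rather than passing it as input — implies converting KB records into training examples that the model memorizes during fine-tuning, and regenerating them whenever the KB changes. A minimal sketch of that conversion step, with an illustrative KB schema and query template that are assumptions, not taken from the paper:

```python
# Hypothetical sketch: flatten knowledge-base records into (query, answer)
# text pairs so a dialogue model can absorb the KB via fine-tuning.
# The entity fields and the question template are illustrative only.

def kb_to_examples(kb):
    """Produce one (query, answer) pair per non-name attribute
    of every KB entity, for use as fine-tuning instances."""
    examples = []
    for entity in kb:
        name = entity["name"]
        for attr, value in entity.items():
            if attr == "name":
                continue
            query = f"What is the {attr} of {name}?"
            examples.append((query, str(value)))
    return examples

kb = [
    {"name": "Hotel Alpha", "area": "centre", "price": "expensive"},
    {"name": "Cafe Beta", "area": "north", "price": "cheap"},
]
pairs = kb_to_examples(kb)
# Updating the KB then means regenerating these pairs and fine-tuning again,
# matching the abstract's claim that the KB can be updated via fine-tuning.
```

Because the KB is consumed offline as training data rather than fed into the context at inference time, this conversion is what lets the approach scale past the few-hundred-entry limit of end-to-end models that take the KB as input.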

Authors (7)
  1. Andrea Madotto (64 papers)
  2. Samuel Cahyawijaya (75 papers)
  3. Genta Indra Winata (94 papers)
  4. Yan Xu (258 papers)
  5. Zihan Liu (102 papers)
  6. Zhaojiang Lin (45 papers)
  7. Pascale Fung (150 papers)
Citations (59)