Improving Longer-range Dialogue State Tracking (2103.00109v2)

Published 27 Feb 2021 in cs.CL, cs.AI, and cs.LG

Abstract: Dialogue state tracking (DST) is a pivotal component in task-oriented dialogue systems. While it is relatively easy for a DST model to capture belief states in short conversations, the task becomes more challenging as the length of a dialogue increases and more distracting context is injected. In this paper, we aim to improve the overall performance of DST with a special focus on handling longer dialogues. We tackle this problem from three perspectives: 1) a model designed to enable hierarchical slot status prediction; 2) a balanced training procedure for generic and task-specific language understanding; 3) data perturbation, which enhances the model's ability to handle longer conversations. We conduct experiments on the MultiWOZ benchmark and demonstrate the effectiveness of each component via a set of ablation tests, especially on longer conversations.
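
The abstract does not spell out the data-perturbation scheme, so below is a minimal sketch of one plausible variant: prepending turns sampled from an unrelated dialogue as distracting context, which lengthens the training example while leaving the gold belief state of the original dialogue unchanged. The function and parameter names (`lengthen_dialogue`, `distractor_pool`, `max_extra_turns`) are illustrative, not taken from the paper.

```python
import random

def lengthen_dialogue(dialogue, distractor_pool, max_extra_turns=4, seed=None):
    """Hypothetical perturbation sketch (not the paper's exact method):
    prepend a prefix of an unrelated dialogue so the model must track
    state over a longer, noisier context. Because the prepended turns
    come from a different conversation, the target belief state of the
    original dialogue stays the same.

    dialogue:        list of (user_utterance, system_utterance) turns
    distractor_pool: list of dialogues to sample distracting turns from
    """
    rng = random.Random(seed)
    # Guard against empty dialogues in the pool before sampling.
    source = rng.choice([d for d in distractor_pool if d])
    k = rng.randint(1, min(max_extra_turns, len(source)))
    return source[:k] + dialogue  # lengthened training example

# Toy usage: one target dialogue, one distractor dialogue in the pool.
dialogue = [("I need a hotel in the north.", "Sure, any price range?")]
pool = [[("Find me a train to Cambridge.", "When would you like to leave?"),
         ("Monday morning.", "There is a train at 9:00.")]]
print(lengthen_dialogue(dialogue, pool, seed=0))
```

A perturbation like this only augments the training data; the evaluation dialogues are left untouched, which is consistent with the paper's goal of improving robustness on longer conversations rather than changing the benchmark itself.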

Authors (5)
  1. Ye Zhang (137 papers)
  2. Yuan Cao (201 papers)
  3. Mahdis Mahdieh (5 papers)
  4. Jeffrey Zhao (12 papers)
  5. Yonghui Wu (115 papers)
Citations (4)