A Joint and Domain-Adaptive Approach to Spoken Language Understanding (2107.11768v1)

Published 25 Jul 2021 in cs.CL and cs.AI

Abstract: Spoken Language Understanding (SLU) is composed of two subtasks: intent detection (ID) and slot filling (SF). There are two lines of research on SLU. One jointly tackles these two subtasks to improve their prediction accuracy, and the other focuses on the domain-adaptation ability of one of the subtasks. In this paper, we attempt to bridge these two lines of research and propose a joint and domain-adaptive approach to SLU. We formulate SLU as a constrained generation task and utilize a dynamic vocabulary based on domain-specific ontology. We conduct experiments on the ASMixed and MTOD datasets and achieve competitive performance with previous state-of-the-art joint models. Moreover, results show that our joint model can be effectively adapted to a new domain.
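The abstract's core idea, SLU as constrained generation over a dynamic, ontology-derived vocabulary, can be illustrated with a minimal sketch. This is not the paper's implementation: the ontology structure, the helper names, and the greedy decoding loop are all illustrative assumptions; the paper's actual model and decoding procedure are described in the full text.

```python
# Minimal sketch (assumed, not the paper's code): joint SLU as constrained
# generation. The decoder may only emit tokens from a dynamic vocabulary
# built from the domain ontology (intent and slot labels) plus the input
# utterance (for copying slot values), so switching domains only requires
# swapping the ontology.

def build_dynamic_vocab(domain_ontology, utterance_tokens):
    """Union of ontology labels (intents, slot names) and input tokens."""
    vocab = set(domain_ontology["intents"]) | set(domain_ontology["slots"])
    vocab |= set(utterance_tokens)  # slot values are copied from the input
    return vocab

def constrained_decode(step_scores, dynamic_vocab):
    """Greedily pick the highest-scoring *allowed* token at each step."""
    output = []
    for scores in step_scores:  # one {token: score} dict per decoding step
        allowed = {t: s for t, s in scores.items() if t in dynamic_vocab}
        output.append(max(allowed, key=allowed.get))
    return output

# Hypothetical single-domain example.
ontology = {"intents": ["PlayMusic"], "slots": ["artist", "song"]}
tokens = ["play", "yesterday", "by", "the", "beatles"]
vocab = build_dynamic_vocab(ontology, tokens)

# Toy per-step scores, as if produced by a trained decoder.
step_scores = [
    {"PlayMusic": 0.9, "artist": 0.1},   # intent first
    {"song": 0.8, "play": 0.2},          # then a slot name...
    {"yesterday": 0.7, "artist": 0.3},   # ...and its value copied from input
]
print(constrained_decode(step_scores, vocab))
# -> ['PlayMusic', 'song', 'yesterday']
```

The design point is that ID and SF share one decoder (the joint aspect), while the allowed-token mask is the only domain-specific component (the domain-adaptive aspect).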

Authors (6)
  1. Linhao Zhang (18 papers)
  2. Yu Shi (153 papers)
  3. Linjun Shou (53 papers)
  4. Ming Gong (246 papers)
  5. Houfeng Wang (43 papers)
  6. Michael Zeng (76 papers)
Citations (2)
