Multi-Domain Spoken Language Understanding Using Domain- and Task-Aware Parameterization (2004.14871v2)

Published 30 Apr 2020 in cs.CL

Abstract: Spoken language understanding (SLU) has been addressed as a supervised learning problem, where a set of training data is available for each domain. However, annotating data for each domain is both financially costly and non-scalable, so we should fully utilize information across all domains. One existing approach solves the problem by conducting multi-domain learning, using shared parameters for joint training across domains. We propose to improve the parameterization of this method by using domain-specific and task-specific model parameters to improve knowledge learning and transfer. Experiments on 5 domains show that our model is more effective for multi-domain SLU and obtains the best results. In addition, we show its transferability by outperforming the prior best model by 12.4% when adapting to a new domain with little data.
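
The abstract describes a shared multi-domain model refined with domain- and task-specific parameters. The sketch below is a minimal, hypothetical PyTorch illustration of that general idea (a shared encoder with per-domain heads for the intent-detection and slot-filling tasks); the class names, layer choices, and pooling strategy are assumptions for illustration, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class DomainTaskAwareSLU(nn.Module):
    """Hypothetical sketch: shared encoder plus per-domain, per-task heads."""

    def __init__(self, vocab_size, hidden_dim, domains,
                 intents_per_domain, slots_per_domain):
        super().__init__()
        # Shared parameters: trained jointly across all domains.
        self.embedding = nn.Embedding(vocab_size, hidden_dim)
        self.encoder = nn.LSTM(hidden_dim, hidden_dim,
                               batch_first=True, bidirectional=True)
        # Domain- and task-specific parameters: one head per (domain, task).
        self.intent_heads = nn.ModuleDict({
            d: nn.Linear(2 * hidden_dim, intents_per_domain[d]) for d in domains
        })
        self.slot_heads = nn.ModuleDict({
            d: nn.Linear(2 * hidden_dim, slots_per_domain[d]) for d in domains
        })

    def forward(self, token_ids, domain):
        # token_ids: (batch, seq_len); domain selects the specific heads.
        hidden, _ = self.encoder(self.embedding(token_ids))
        # Intent detection uses a pooled utterance representation.
        intent_logits = self.intent_heads[domain](hidden.mean(dim=1))
        # Slot filling uses per-token representations.
        slot_logits = self.slot_heads[domain](hidden)
        return intent_logits, slot_logits
```

In such a setup, one model is trained jointly on batches from all domains, routing each batch through the shared encoder and then through the heads belonging to its own domain, which is one way the shared and specific parameters described in the abstract could be combined.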

Authors (6)
  1. Libo Qin (77 papers)
  2. Minheng Ni (18 papers)
  3. Yue Zhang (620 papers)
  4. Wanxiang Che (152 papers)
  5. Yangming Li (32 papers)
  6. Ting Liu (329 papers)
Citations (12)
