Adaptable Multi-Domain Language Model for Transformer ASR (2008.06208v2)

Published 14 Aug 2020 in eess.AS, cs.CL, and cs.SD

Abstract: We propose an adapter-based multi-domain Transformer language model (LM) for Transformer ASR. The model consists of a large common LM and small adapters, and it can perform multi-domain adaptation by training only the small adapters and their related layers. The proposed model can also reuse a fully fine-tuned LM, i.e., one fine-tuned using all layers of the original model. The proposed LM can be expanded to new domains by adding about 2% of parameters for the first domain and about 13% of parameters from the second domain onward. The proposed model further reduces model maintenance cost because it allows the costly and time-consuming common LM pre-training process to be omitted. Using the proposed adapter-based approach, we observed that a general LM with an adapter can outperform a dedicated music-domain LM in terms of word error rate (WER).
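The abstract describes keeping a large common LM frozen and adapting to each domain through small trainable adapter modules. Below is a minimal sketch, not the authors' implementation, of what such an adapter-augmented Transformer LM layer could look like in PyTorch; the class names, bottleneck size, and layer dimensions are illustrative assumptions.

```python
# Minimal sketch (assumed PyTorch implementation, not the paper's code):
# a frozen "common LM" Transformer layer wrapped with a small trainable
# per-domain bottleneck adapter.
import torch
import torch.nn as nn


class Adapter(nn.Module):
    """Small bottleneck adapter: down-project, nonlinearity, up-project, residual add."""

    def __init__(self, d_model: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(d_model, bottleneck)
        self.up = nn.Linear(bottleneck, d_model)
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))


class AdaptedTransformerLMLayer(nn.Module):
    """One common-LM Transformer layer (frozen) plus a trainable domain adapter."""

    def __init__(self, d_model: int = 512, nhead: int = 8, bottleneck: int = 64):
        super().__init__()
        self.base = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        for p in self.base.parameters():          # freeze the shared common LM
            p.requires_grad = False
        self.adapter = Adapter(d_model, bottleneck)  # only this part is domain-specific

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.adapter(self.base(x))


if __name__ == "__main__":
    layer = AdaptedTransformerLMLayer()
    tokens = torch.randn(2, 16, 512)              # (batch, sequence, d_model)
    out = layer(tokens)
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    total = sum(p.numel() for p in layer.parameters())
    print(out.shape, f"trainable fraction: {trainable / total:.2%}")
```

Because only the adapter parameters require gradients, adding a new domain amounts to training (and storing) a small fraction of the full model's parameters, which is the per-domain overhead the abstract quantifies.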

Authors (13)
  1. Taewoo Lee (21 papers)
  2. Min-Joong Lee (1 paper)
  3. Tae Gyoon Kang (1 paper)
  4. Seokyeoung Jung (1 paper)
  5. Minseok Kwon (4 papers)
  6. Yeona Hong (3 papers)
  7. Jungin Lee (26 papers)
  8. Kyoung-Gu Woo (2 papers)
  9. Ho-Gyeong Kim (2 papers)
  10. Jiseung Jeong (1 paper)
  11. Jihyun Lee (25 papers)
  12. Hosik Lee (4 papers)
  13. Young Sang Choi (6 papers)
Citations (16)