Adaptable Multi-Domain Language Model for Transformer ASR (2008.06208v2)
Abstract: We propose an adapter-based multi-domain Transformer language model (LM) for Transformer ASR. The model consists of a large common LM and small adapters, and it can perform multi-domain adaptation by training only the small adapters and their related layers. The proposed model can also reuse a fully fine-tuned LM, i.e., one fine-tuned using all layers of the original model. The proposed LM can be expanded to new domains by adding about 2% of parameters for the first domain and about 13% of parameters for each subsequent domain. The proposed model also reduces the model maintenance cost, because the costly and time-consuming common LM pre-training process can be omitted. Using the proposed adapter-based approach, we observed that a general LM with an adapter can outperform a dedicated music-domain LM in terms of word error rate (WER).
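To make the adapter idea in the abstract concrete, here is a minimal PyTorch-style sketch (not the authors' implementation): a small bottleneck adapter is attached to each frozen layer of the common LM, so only the adapter parameters are trained per domain. The bottleneck width, layer structure, and placement of the adapter are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: normalize, down-project, nonlinearity, up-project, residual.
    The bottleneck size is a hypothetical choice for illustration."""
    def __init__(self, d_model: int, bottleneck: int = 64):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        self.down = nn.Linear(d_model, bottleneck)
        self.up = nn.Linear(bottleneck, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(torch.relu(self.down(self.norm(x))))

class AdaptedLayer(nn.Module):
    """Wraps a frozen layer of the common LM and appends a trainable per-domain adapter."""
    def __init__(self, base_layer: nn.Module, d_model: int):
        super().__init__()
        self.base = base_layer
        for p in self.base.parameters():  # freeze the common LM layer
            p.requires_grad = False
        self.adapter = Adapter(d_model)   # only this part is trained per domain

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.adapter(self.base(x))

# Rough, illustrative parameter-count check: the adapter adds only a few
# percent of parameters relative to the frozen common LM layer.
d_model = 512
base = nn.TransformerEncoderLayer(d_model=d_model, nhead=8, batch_first=True)
layer = AdaptedLayer(base, d_model)
frozen = sum(p.numel() for p in base.parameters())
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(f"adapter params / base params = {trainable / frozen:.1%}")
```

Under this sketch, extending to a new domain means adding one small adapter per layer while the common LM stays fixed, which is one plausible way to read the small per-domain parameter overhead reported in the abstract.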
- Taewoo Lee
- Min-Joong Lee
- Tae Gyoon Kang
- Seokyeoung Jung
- Minseok Kwon
- Yeona Hong
- Jungin Lee
- Kyoung-Gu Woo
- Ho-Gyeong Kim
- Jiseung Jeong
- Jihyun Lee
- Hosik Lee
- Young Sang Choi