Enhancing Code-switching Speech Recognition with Interactive Language Biases (2309.16953v1)

Published 29 Sep 2023 in eess.AS and cs.SD

Abstract: Languages usually switch within a multilingual speech signal, especially in a bilingual society. This phenomenon is referred to as code-switching (CS), making automatic speech recognition (ASR) challenging in a multilingual scenario. We propose to improve CS-ASR by biasing the hybrid CTC/attention ASR model with multi-level language information comprising frame- and token-level language posteriors. The interaction between various resolutions of language biases is subsequently explored in this work. We conducted experiments on datasets from the ASRU 2019 code-switching challenge. Compared to the baseline, the proposed interactive language biases (ILB) method achieves higher performance, and ablation studies highlight the effects of different language biases and their interactions. In addition, the results indicate that language biasing implicitly enhances internal language modeling, leading to performance degradation after employing an external language model.
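
To make the multi-level biasing idea concrete, the sketch below shows one way frame-level and token-level language posteriors could be predicted from encoder and decoder states and added back as biases. This is a minimal PyTorch sketch under assumed shapes and module names (FrameLevelLanguageBias, TokenLevelLanguageBias, lid_head, and bias_proj are all hypothetical), not the authors' implementation; in particular, it applies each bias independently and does not reproduce the interaction between bias resolutions that the ILB method explores.

```python
# Illustrative sketch of multi-level language biasing for CS-ASR.
# Module and variable names are assumptions for illustration only;
# the paper biases a hybrid CTC/attention model with frame- and
# token-level language posteriors and lets the two levels interact.
import torch
import torch.nn as nn


class FrameLevelLanguageBias(nn.Module):
    """Predict per-frame language posteriors and add them back as a bias."""

    def __init__(self, d_model: int, num_langs: int = 2):
        super().__init__()
        self.lid_head = nn.Linear(d_model, num_langs)   # frame-level LID head
        self.bias_proj = nn.Linear(num_langs, d_model)  # posterior -> bias vector

    def forward(self, enc_out: torch.Tensor):
        # enc_out: (batch, frames, d_model) encoder states
        lang_post = self.lid_head(enc_out).softmax(dim=-1)  # (B, T, num_langs)
        biased = enc_out + self.bias_proj(lang_post)        # language-biased frames
        return biased, lang_post


class TokenLevelLanguageBias(nn.Module):
    """Predict per-token language posteriors and bias decoder states."""

    def __init__(self, d_model: int, num_langs: int = 2):
        super().__init__()
        self.lid_head = nn.Linear(d_model, num_langs)   # token-level LID head
        self.bias_proj = nn.Linear(num_langs, d_model)

    def forward(self, dec_state: torch.Tensor):
        # dec_state: (batch, tokens, d_model) decoder states
        lang_post = self.lid_head(dec_state).softmax(dim=-1)  # (B, U, num_langs)
        return dec_state + self.bias_proj(lang_post), lang_post


if __name__ == "__main__":
    B, T, U, D = 2, 100, 12, 256  # batch, frames, tokens, model dim (assumed)
    frame_bias = FrameLevelLanguageBias(D)
    token_bias = TokenLevelLanguageBias(D)
    enc, frame_post = frame_bias(torch.randn(B, T, D))
    dec, token_post = token_bias(torch.randn(B, U, D))
    print(enc.shape, frame_post.shape, dec.shape, token_post.shape)
```

Per the abstract, the ILB method goes further by making the frame- and token-level biases interact rather than applying them independently as above; the auxiliary LID heads would also be trained with language labels alongside the CTC and attention objectives.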

Authors (5)
  1. Hexin Liu (35 papers)
  2. Leibny Paola Garcia (14 papers)
  3. Xiangyu Zhang (328 papers)
  4. Andy W. H. Khong (12 papers)
  5. Sanjeev Khudanpur (74 papers)
Citations (8)
