Aligning Speech to Languages to Enhance Code-switching Speech Recognition (2403.05887v1)
Abstract: Code-switching (CS) refers to switching between languages within a speech signal and causes language confusion for automatic speech recognition (ASR). To address this confusion, we propose a language alignment loss that performs frame-level language identification using pseudo language labels learned from the ASR decoder, eliminating the need for frame-level language annotations. To further tackle the complex token alternatives that arise in language modeling of bilingual scenarios, we propose to employ large language models (LLMs) via a generative error correction method. A linguistic hint, which incorporates language information derived from the proposed language alignment loss and the decoded hypotheses, is introduced to guide the prompting of the LLMs. The proposed methods are evaluated on the SEAME dataset and data from the ASRU 2019 Mandarin-English code-switching speech recognition challenge. Incorporating the language alignment loss yields higher CS-ASR performance on both datasets with only a negligible increase in the number of parameters over the baseline model. This work also highlights the efficacy of the language alignment loss in balancing primary-language-dominant bilingual data during training, with an 8.6% relative improvement on the ASRU dataset over the baseline model. Evaluation with LLMs demonstrates the advantage of the linguistic hint, achieving 14.1% and 5.5% relative improvements on the test sets of the ASRU and SEAME datasets, respectively.
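The frame-level language alignment idea can be illustrated with a short sketch. The following is a minimal, hypothetical implementation and not the authors' released code: it assumes a lightweight linear classifier over encoder frames (consistent with the negligible parameter increase reported above) and pseudo language labels, e.g. Mandarin, English, or blank/silence, obtained from the decoder's token-to-frame alignment. Names such as `FrameLanguageHead` and `language_alignment_loss` are invented for illustration.

```python
# Minimal sketch of a frame-level language alignment loss (assumption:
# pseudo labels per frame come from the ASR decoder's alignments; the
# 3-class label set {Mandarin, English, blank} is illustrative).
import torch
import torch.nn as nn
import torch.nn.functional as F


class FrameLanguageHead(nn.Module):
    """Lightweight frame-level language classifier over encoder outputs.

    A single linear layer, in line with the paper's claim of a
    negligible parameter increase over the baseline.
    """

    def __init__(self, encoder_dim: int, num_classes: int = 3):
        super().__init__()
        self.proj = nn.Linear(encoder_dim, num_classes)

    def forward(self, encoder_out: torch.Tensor) -> torch.Tensor:
        # encoder_out: (batch, frames, encoder_dim)
        return self.proj(encoder_out)  # (batch, frames, num_classes)


def language_alignment_loss(logits, pseudo_labels, ignore_index=-100):
    """Cross-entropy between frame-level language logits and pseudo labels.

    pseudo_labels: (batch, frames) int tensor derived from the decoder's
    token-to-frame alignment; frames without a confident label are set to
    ignore_index so they do not contribute to the loss.
    """
    return F.cross_entropy(
        logits.transpose(1, 2),  # (batch, num_classes, frames) for CE
        pseudo_labels,
        ignore_index=ignore_index,
    )


if __name__ == "__main__":
    # Toy usage: 2 utterances, 50 frames, 256-dim encoder output.
    head = FrameLanguageHead(encoder_dim=256)
    enc = torch.randn(2, 50, 256)
    labels = torch.randint(0, 3, (2, 50))
    labels[:, 40:] = -100  # e.g., padded frames are ignored
    loss = language_alignment_loss(head(enc), labels)
    print(loss.item())
```

A similarly hypothetical linguistic hint for the generative error correction stage might prepend language information to the decoded N-best hypotheses before prompting the LLM. The exact prompt wording below is an assumption; the paper only states that the hint combines language information with the decoded hypotheses.

```python
# Hypothetical linguistic hint construction for LLM-based error correction.
hypotheses = ["我 想 去 library 借 书", "我 想 去 library 接 书"]
hint = "This is a Mandarin-English code-switched utterance."
prompt = (
    hint
    + " Below are ASR hypotheses; output the most likely transcription.\n"
    + "\n".join(f"{i + 1}. {h}" for i, h in enumerate(hypotheses))
)
print(prompt)
```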
Authors: Hexin Liu, Xiangyu Zhang, Leibny Paola Garcia, Andy W. H. Khong, Eng Siong Chng, Shinji Watanabe