Teaching a Multilingual Large Language Model to Understand Multilingual Speech via Multi-Instructional Training (2404.10922v1)

Published 16 Apr 2024 in cs.CL, cs.SD, and eess.AS

Abstract: Recent advancements in language modeling have led to the emergence of large language models (LLMs) capable of various natural language processing tasks. Despite their success in text-based tasks, applying LLMs to the speech domain remains limited and challenging. This paper presents BLOOMZMMS, a novel model that integrates a multilingual LLM with a multilingual speech encoder, aiming to harness the capabilities of LLMs for speech recognition and beyond. Utilizing a multi-instructional training approach, we demonstrate the transferability of linguistic knowledge from the text to the speech modality. Our experiments, conducted on 1900 hours of transcribed data from 139 languages, establish that a multilingual speech representation can be effectively learned and aligned with a multilingual LLM. While this learned representation initially shows limitations in task generalization, we address this issue by generating synthetic targets in a multi-instructional style. Our zero-shot evaluation results confirm the robustness of our approach across multiple tasks, including speech translation and multilingual spoken language understanding, thereby opening new avenues for applying LLMs in the speech domain.
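The abstract describes two key ingredients: projecting multilingual speech-encoder representations into the LLM's input space, and expanding each utterance into multiple instruction-target pairs so the model generalizes beyond transcription. Below is a minimal sketch of how such an integration is commonly wired up; all module names, dimensions, prompt wordings, and the choice of a linear projection are illustrative assumptions, not details taken from the paper.

    # Minimal sketch of a speech-encoder-to-LLM bridge of the kind the
    # abstract describes. Dimensions and design choices are assumptions.
    import torch
    import torch.nn as nn

    class SpeechToLLMAdapter(nn.Module):
        """Projects frame-level speech features into the LLM embedding
        space so they can be consumed alongside instruction embeddings."""

        def __init__(self, speech_dim: int = 1280, llm_dim: int = 4096):
            super().__init__()
            self.proj = nn.Linear(speech_dim, llm_dim)  # assumed linear bridge

        def forward(self, speech_features: torch.Tensor) -> torch.Tensor:
            # speech_features: (batch, frames, speech_dim), e.g. from an
            # MMS-style multilingual encoder.
            return self.proj(speech_features)

    def make_multi_instruction_pairs(transcript: str, translation: str):
        """Expands one utterance into several (instruction, target) pairs,
        mimicking the multi-instructional style the abstract mentions.
        The prompt wording here is hypothetical."""
        return [
            ("Transcribe the speech.", transcript),
            ("Translate the speech into English.", translation),
        ]

    # Usage sketch: prepend embedded instruction tokens to the projected
    # speech frames; the LLM then generates the target autoregressively.
    adapter = SpeechToLLMAdapter()
    speech = torch.randn(1, 200, 1280)             # dummy 200-frame utterance
    speech_embeds = adapter(speech)                # (1, 200, 4096)
    instruction_embeds = torch.randn(1, 12, 4096)  # stand-in for embedded prompt
    llm_inputs = torch.cat([instruction_embeds, speech_embeds], dim=1)

Training on several instructions per utterance is what lets the same aligned speech representation serve transcription, translation, and spoken language understanding at inference time, which is the zero-shot generalization the abstract reports.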

Authors (2)
  1. Pavel Denisov (19 papers)
  2. Ngoc Thang Vu (93 papers)
Citations (1)