
Analysis of Multilingual Sequence-to-Sequence speech recognition systems (1811.03451v1)

Published 7 Nov 2018 in eess.AS, cs.CL, and cs.LG

Abstract: This paper investigates the applications of various multilingual approaches developed in conventional hidden Markov model (HMM) systems to sequence-to-sequence (seq2seq) automatic speech recognition (ASR). On a set composed of Babel data, we first show the effectiveness of multi-lingual training with stacked bottle-neck (SBN) features. Then we explore various architectures and training strategies of multi-lingual seq2seq models based on CTC-attention networks including combinations of output layer, CTC and/or attention component re-training. We also investigate the effectiveness of language-transfer learning in a very low resource scenario when the target language is not included in the original multi-lingual training data. Interestingly, we found multilingual features superior to multilingual models, and this finding suggests that we can efficiently combine the benefits of the HMM system with the seq2seq system through these multilingual feature techniques.
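The CTC-attention networks mentioned in the abstract are trained with a multi-task objective that interpolates the CTC loss and the attention decoder loss. As a minimal sketch (the interpolation weight `lam` here is a hypothetical value, not one reported in this abstract):

```python
def hybrid_loss(ctc_loss: float, attention_loss: float, lam: float = 0.3) -> float:
    """Joint CTC/attention multi-task objective.

    lam weights the CTC branch; (1 - lam) weights the attention branch.
    The value 0.3 is illustrative only, not taken from the paper.
    """
    return lam * ctc_loss + (1.0 - lam) * attention_loss

# Example: combine per-batch losses from the two branches.
loss = hybrid_loss(ctc_loss=2.0, attention_loss=1.0, lam=0.3)  # 0.3*2.0 + 0.7*1.0 = 1.3
```

During transfer to a new target language, one of the retraining strategies the abstract lists amounts to keeping the shared encoder and re-optimizing only the output layer, the CTC branch, and/or the attention decoder under this same objective.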

Authors (6)
  1. Martin Karafiát (7 papers)
  2. Murali Karthick Baskar (15 papers)
  3. Shinji Watanabe (416 papers)
  4. Takaaki Hori (41 papers)
  5. Matthew Wiesner (32 papers)
  6. Jan "Honza" Černocký (8 papers)
Citations (16)
