Leveraging Weakly Supervised Data to Improve End-to-End Speech-to-Text Translation (1811.02050v2)

Published 5 Nov 2018 in cs.CL, cs.LG, cs.SD, and eess.AS

Abstract: End-to-end Speech Translation (ST) models have many potential advantages when compared to the cascade of Automatic Speech Recognition (ASR) and text Machine Translation (MT) models, including lowered inference latency and the avoidance of error compounding. However, the quality of end-to-end ST is often limited by a paucity of training data, since it is difficult to collect large parallel corpora of speech and translated transcript pairs. Previous studies have proposed the use of pre-trained components and multi-task learning in order to benefit from weakly supervised training data, such as speech-to-transcript or text-to-foreign-text pairs. In this paper, we demonstrate that using pre-trained MT or text-to-speech (TTS) synthesis models to convert weakly supervised data into speech-to-translation pairs for ST training can be more effective than multi-task learning. Furthermore, we demonstrate that a high quality end-to-end ST model can be trained using only weakly supervised datasets, and that synthetic data sourced from unlabeled monolingual text or speech can be used to improve performance. Finally, we discuss methods for avoiding overfitting to synthetic speech with a quantitative ablation study.
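
The data-augmentation idea described in the abstract can be summarized in a short sketch: weakly supervised ASR pairs (speech, transcript) are turned into ST pairs by machine-translating the transcripts, and MT pairs (source text, translation) are turned into ST pairs by synthesizing speech for the source text. The sketch below is illustrative only and is not the paper's implementation; the `mt_translate` and `tts_synthesize` helpers are hypothetical stand-ins for whatever pretrained MT and TTS models are used.

```python
# Illustrative sketch of converting weakly supervised data into synthetic
# speech-to-translation (ST) training pairs. `mt_translate` and
# `tts_synthesize` are hypothetical pretrained-model wrappers, not APIs
# from the paper.

def augment_asr_pairs(asr_pairs, mt_translate):
    """ASR data (speech, source transcript) -> ST data (speech, translation),
    by machine-translating each source-language transcript."""
    return [(speech, mt_translate(transcript))
            for speech, transcript in asr_pairs]

def augment_mt_pairs(mt_pairs, tts_synthesize):
    """MT data (source text, translation) -> ST data (synthetic speech, translation),
    by synthesizing speech for each source-language sentence."""
    return [(tts_synthesize(source_text), translation)
            for source_text, translation in mt_pairs]

# Usage: pool real and synthetic pairs into one ST training set, e.g.
# st_data = real_st_pairs \
#     + augment_asr_pairs(asr_data, mt_translate) \
#     + augment_mt_pairs(mt_data, tts_synthesize)
```

The abstract's caution about overfitting to synthetic speech applies to the `augment_mt_pairs` branch above: because its audio comes from a TTS model rather than real speakers, the paper studies how to mix such data without degrading performance on natural speech.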

Authors (9)
  1. Ye Jia (33 papers)
  2. Melvin Johnson (35 papers)
  3. Wolfgang Macherey (23 papers)
  4. Ron J. Weiss (30 papers)
  5. Yuan Cao (201 papers)
  6. Chung-Cheng Chiu (48 papers)
  7. Naveen Ari (2 papers)
  8. Stella Laurenzo (5 papers)
  9. Yonghui Wu (115 papers)
Citations (158)