Pretraining by Backtranslation for End-to-end ASR in Low-Resource Settings (1812.03919v2)

Published 10 Dec 2018 in eess.AS, cs.CL, and cs.SD

Abstract: We explore training attention-based encoder-decoder ASR in low-resource settings. These models perform poorly when trained on small amounts of transcribed speech, in part because they depend on having sufficient target-side text to train the attention and decoder networks. In this paper we address this shortcoming by pretraining our network parameters using only text-based data and transcribed speech from other languages. We analyze the relative contributions of both sources of data. Across 3 test languages, our text-based approach resulted in a 20% average relative improvement over a text-based augmentation technique without pretraining. Using transcribed speech from nearby languages gives a further 20-30% relative reduction in character error rate.

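The abstract outlines two sources of pretraining for a low-resource attention-based encoder-decoder ASR system: text-only data for the attention and decoder networks, and transcribed speech from other languages for the encoder. The sketch below is a minimal illustration of that general recipe, assuming PyTorch; the module names (TextToStates, Seq2SeqASR), dimensions, and training loop are hypothetical stand-ins, not the authors' implementation.

```python
import torch
import torch.nn as nn

class TextToStates(nn.Module):
    """Maps a character sequence to pseudo encoder states so the
    attention/decoder side can be pretrained on text alone."""
    def __init__(self, vocab, dim=256):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)

    def forward(self, chars):                      # chars: (B, T) int64
        h, _ = self.rnn(self.emb(chars))
        return h                                   # (B, T, dim) pseudo states

class Seq2SeqASR(nn.Module):
    """Toy attention encoder-decoder; the decoder can consume either
    real encoder states (from speech) or pseudo states (from text)."""
    def __init__(self, vocab, dim=256, feat_dim=80):
        super().__init__()
        self.encoder = nn.GRU(feat_dim, dim, batch_first=True)
        self.attn = nn.MultiheadAttention(dim, 4, batch_first=True)
        self.emb = nn.Embedding(vocab, dim)
        self.dec = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab)

    def decode(self, states, prev_chars):
        q, _ = self.dec(self.emb(prev_chars))      # (B, U, dim)
        ctx, _ = self.attn(q, states, states)      # attend over encoder states
        return self.out(ctx + q)                   # (B, U, vocab) logits

    def forward(self, feats, prev_chars):
        states, _ = self.encoder(feats)
        return self.decode(states, prev_chars)

vocab, loss_fn = 100, nn.CrossEntropyLoss()
t2s, asr = TextToStates(vocab), Seq2SeqASR(vocab)

# Stage 1: text-only pretraining of the attention and decoder parameters
# via pseudo encoder states (backtranslation-style use of text data).
chars = torch.randint(1, vocab, (8, 20))           # stand-in target-language text
logits = asr.decode(t2s(chars), chars[:, :-1])
loss = loss_fn(logits.reshape(-1, vocab), chars[:, 1:].reshape(-1))
loss.backward()                                    # updates t2s + attention/decoder

# Stage 2 (not shown): pretrain asr.encoder on transcribed speech from other
# languages, then fine-tune the full model on the small target-language corpus.
feats = torch.randn(8, 50, 80)                     # stand-in log-mel features
print(asr(feats, chars[:, :-1]).shape)             # torch.Size([8, 19, 100])
```

The point of the sketch is only the parameter-sharing pattern: the attention and decoder weights trained in Stage 1 carry over unchanged when real speech features replace the pseudo states, which is what lets text-only data help a speech model.
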
Authors (6)
  1. Matthew Wiesner (32 papers)
  2. Adithya Renduchintala (17 papers)
  3. Shinji Watanabe (416 papers)
  4. Chunxi Liu (20 papers)
  5. Najim Dehak (71 papers)
  6. Sanjeev Khudanpur (74 papers)
Citations (32)
