
Unified Speech-Text Pre-training for Speech Translation and Recognition (2204.05409v1)

Published 11 Apr 2022 in cs.CL

Abstract: We describe a method to jointly pre-train speech and text in an encoder-decoder modeling framework for speech translation and recognition. The proposed method incorporates four self-supervised and supervised subtasks for cross modality learning. A self-supervised speech subtask leverages unlabelled speech data, and a (self-)supervised text-to-text subtask makes use of abundant text training data. Two auxiliary supervised speech tasks are included to unify speech and text modeling space. Our contribution lies in integrating linguistic information from the text corpus into the speech pre-training. Detailed analysis reveals learning interference among subtasks. Two pre-training configurations for speech translation and recognition, respectively, are presented to alleviate subtask interference. Our experiments show the proposed method can effectively fuse speech and text information into one model. It achieves 1.7 to 2.3 BLEU improvements over the state of the art on the MuST-C speech translation dataset and comparable WERs to wav2vec 2.0 on the Librispeech speech recognition task.
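The abstract describes fusing several self-supervised and supervised subtasks into a single encoder-decoder model by training them jointly on shared parameters. Below is a minimal, hypothetical PyTorch sketch of that general multi-task idea: toy linear modules stand in for the speech and text encoders and the shared decoder head, and the loss weights are made up for illustration. This is not the paper's actual architecture or training recipe.

```python
import torch
import torch.nn as nn

# Toy shared model: one "encoder" per modality into a shared space,
# and a shared "decoder" head over a small vocabulary (all illustrative).
VOCAB, D = 100, 32

speech_enc = nn.Linear(80, D)      # speech features (e.g. 80-dim fbank) -> shared space
text_enc = nn.Embedding(VOCAB, D)  # text tokens -> shared space
decoder = nn.Linear(D, VOCAB)      # shared prediction head
params = list(speech_enc.parameters()) + list(text_enc.parameters()) + list(decoder.parameters())
opt = torch.optim.Adam(params, lr=1e-3)
ce = nn.CrossEntropyLoss()

def subtask_loss(hidden, targets):
    """Cross-entropy through the shared decoder head for one subtask batch."""
    logits = decoder(hidden)                          # (B, T, VOCAB)
    return ce(logits.reshape(-1, VOCAB), targets.reshape(-1))

# One joint update: each subtask contributes a weighted loss, and all losses
# backpropagate into the shared parameters. Batches and weights are fake.
speech_feats = torch.randn(4, 50, 80)                 # dummy speech batch
speech_tgts = torch.randint(0, VOCAB, (4, 50))        # dummy speech targets
text_src = torch.randint(0, VOCAB, (4, 20))           # dummy text batch
text_tgts = torch.randint(0, VOCAB, (4, 20))          # dummy text targets

loss = (1.0 * subtask_loss(speech_enc(speech_feats), speech_tgts)   # supervised speech subtask
        + 1.0 * subtask_loss(text_enc(text_src), text_tgts))        # text-to-text subtask
opt.zero_grad()
loss.backward()
opt.step()
```

In the paper, alleviating interference between such subtasks is handled by using different pre-training configurations for translation and recognition; the sketch above only shows the basic shared-parameter, summed-loss setup.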

Authors (11)
  1. Yun Tang (42 papers)
  2. Hongyu Gong (44 papers)
  3. Ning Dong (15 papers)
  4. Changhan Wang (46 papers)
  5. Wei-Ning Hsu (76 papers)
  6. Jiatao Gu (83 papers)
  7. Alexei Baevski (39 papers)
  8. Xian Li (115 papers)
  9. Abdelrahman Mohamed (59 papers)
  10. Michael Auli (73 papers)
  11. Juan Pino (50 papers)
Citations (80)