"Listen, Understand and Translate": Triple Supervision Decouples End-to-end Speech-to-text Translation (2009.09704v3)
Abstract: An end-to-end speech-to-text translation (ST) system takes audio in a source language and outputs text in a target language. Existing methods are limited by the amount of parallel data available. Can we build a system that fully utilizes the signals in a parallel ST corpus? We draw inspiration from the human understanding system, which is composed of auditory perception and cognitive processing. In this paper, we propose Listen-Understand-Translate (LUT), a unified framework with triple supervision signals that decouples the end-to-end speech-to-text translation task. LUT guides the acoustic encoder to extract as much information as possible from the auditory input. In addition, LUT utilizes a pre-trained BERT model to enforce that the upper encoder produces as much semantic information as possible, without requiring extra data. We perform experiments on a diverse set of speech translation benchmarks, including Librispeech English-French, IWSLT English-German, and TED English-Chinese. Our results demonstrate that LUT achieves state-of-the-art performance, outperforming previous methods. The code is available at https://github.com/dqqcasia/st.
- Qianqian Dong
- Rong Ye
- Mingxuan Wang
- Hao Zhou
- Shuang Xu
- Bo Xu
- Lei Li
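
To make the triple-supervision idea concrete, below is a minimal PyTorch sketch of an objective of the kind the abstract describes: one loss per stage ("Listen", "Understand", "Translate"). It assumes a CTC loss over source transcripts for the acoustic encoder, an MSE match against frozen BERT features for the semantic encoder, and cross-entropy for the translation decoder. All module names (`LUTSketch`, `triple_loss`), dimensions, and the mean-pooling choice are illustrative assumptions, not the paper's exact implementation; see the linked repository for the authors' code.

```python
# Minimal sketch of an LUT-style triple-supervision objective, assuming:
#  - CTC over source transcripts supervises the acoustic encoder ("Listen"),
#  - MSE against frozen BERT features supervises the semantic encoder ("Understand"),
#  - token-level cross-entropy supervises the translation decoder ("Translate").
# Module names, dimensions, and pooling are illustrative, not the authors' exact design.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LUTSketch(nn.Module):
    def __init__(self, d_model=512, src_vocab=10000, tgt_vocab=10000, bert_dim=768):
        super().__init__()
        enc_layer = lambda: nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.acoustic_encoder = nn.TransformerEncoder(enc_layer(), num_layers=6)
        self.semantic_encoder = nn.TransformerEncoder(enc_layer(), num_layers=6)
        self.ctc_head = nn.Linear(d_model, src_vocab + 1)   # +1 for the CTC blank symbol
        self.to_bert = nn.Linear(d_model, bert_dim)         # project into BERT feature space
        self.tgt_embed = nn.Embedding(tgt_vocab, d_model)
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True), num_layers=6)
        self.out = nn.Linear(d_model, tgt_vocab)

    def forward(self, feats, tgt_in):
        h_a = self.acoustic_encoder(feats)                  # "Listen"
        h_s = self.semantic_encoder(h_a)                    # "Understand"
        T = tgt_in.size(1)                                  # causal mask for the decoder
        mask = torch.triu(torch.full((T, T), float("-inf")), diagonal=1)
        h_d = self.decoder(self.tgt_embed(tgt_in), h_s, tgt_mask=mask)  # "Translate"
        return h_a, h_s, self.out(h_d)

def triple_loss(model, feats, feat_lens, src_tokens, src_lens,
                tgt_in, tgt_out, bert_feats, weights=(1.0, 1.0, 1.0)):
    h_a, h_s, logits = model(feats, tgt_in)
    # 1) Acoustic supervision: CTC over the source transcript.
    log_probs = F.log_softmax(model.ctc_head(h_a), dim=-1).transpose(0, 1)  # (T, B, V+1)
    blank = model.ctc_head.out_features - 1
    l_ctc = F.ctc_loss(log_probs, src_tokens, feat_lens, src_lens, blank=blank)
    # 2) Semantic supervision: match mean-pooled states to frozen BERT features.
    l_sem = F.mse_loss(model.to_bert(h_s).mean(dim=1), bert_feats)
    # 3) Translation supervision: standard cross-entropy on target tokens.
    l_st = F.cross_entropy(logits.reshape(-1, logits.size(-1)), tgt_out.reshape(-1))
    w1, w2, w3 = weights
    return w1 * l_ctc + w2 * l_sem + w3 * l_st
```

Because the three losses attach to distinct stages, each encoder receives a direct signal from the same parallel ST corpus, which is the sense in which the framework "decouples" the end-to-end task.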