The HW-TSC's Offline Speech Translation Systems for IWSLT 2021 Evaluation (2108.03845v1)

Published 9 Aug 2021 in cs.CL

Abstract: This paper describes our participation in the IWSLT 2021 offline speech translation task. Our system was built in cascade form, comprising a speaker diarization module, an Automatic Speech Recognition (ASR) module, and a Machine Translation (MT) module. We directly use the LIUM SpkDiarization tool as the diarization module. The ASR module is trained on three ASR datasets from different sources via multi-source training, using a modified Transformer encoder. The MT module is pretrained on the large-scale WMT news translation dataset and fine-tuned on the TED corpus. Our method achieves a BLEU score of 24.6 on the 2021 test set.
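The abstract outlines a three-stage cascade (diarization, ASR, MT). The sketch below is only a conceptual illustration of how such a pipeline composes, assuming segment-level processing; all function names, signatures, and the example data are hypothetical placeholders, not the authors' implementation or the LIUM SpkDiarization API.

```python
# Conceptual sketch of a cascade speech translation pipeline.
# Hypothetical placeholders throughout; not the paper's actual code.

def run_diarization(audio_path):
    """Split a long recording into speaker-homogeneous segments.
    (The paper uses the LIUM SpkDiarization tool for this step.)"""
    # Placeholder: return (start, end) times in seconds for each segment.
    return [(0.0, 7.5), (7.5, 15.2)]

def transcribe(audio_path, segment):
    """ASR step: a Transformer-encoder-based model, trained on multiple
    ASR corpora, maps an audio segment to a source-language transcript."""
    return "hypothetical transcript for this segment"

def translate(transcript):
    """MT step: a model pretrained on WMT news data and fine-tuned on
    the TED corpus translates the transcript into the target language."""
    return "hypothetical translation"

def cascade_speech_translation(audio_path):
    """End-to-end cascade: diarization -> ASR -> MT, segment by segment."""
    outputs = []
    for segment in run_diarization(audio_path):
        transcript = transcribe(audio_path, segment)
        outputs.append(translate(transcript))
    return outputs

if __name__ == "__main__":
    print(cascade_speech_translation("talk.wav"))
```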

Authors (12)
  1. Minghan Wang (23 papers)
  2. Yuxia Wang (41 papers)
  3. Chang Su (37 papers)
  4. Jiaxin Guo (40 papers)
  5. Yingtao Zhang (19 papers)
  6. Yujia Liu (27 papers)
  7. Min Zhang (630 papers)
  8. Shimin Tao (31 papers)
  9. Xingshan Zeng (38 papers)
  10. Liangyou Li (36 papers)
  11. Hao Yang (328 papers)
  12. Ying Qin (51 papers)
Citations (6)
