
NAIST Simultaneous Speech Translation System for IWSLT 2024 (2407.00826v1)

Published 30 Jun 2024 in cs.CL, cs.SD, and eess.AS

Abstract: This paper describes NAIST's submission to the simultaneous track of the IWSLT 2024 Evaluation Campaign: English-to-{German, Japanese, Chinese} speech-to-text translation and English-to-Japanese speech-to-speech translation. We develop a multilingual end-to-end speech-to-text translation model combining two pre-trained language models, HuBERT and mBART. We trained this model with two decoding policies, Local Agreement (LA) and AlignAtt. The submitted models employ the LA policy because it outperformed the AlignAtt policy in previous models. Our speech-to-speech translation method is a cascade of the above speech-to-text model and an incremental text-to-speech (TTS) module that incorporates a phoneme estimation model, a parallel acoustic model, and a parallel WaveGAN vocoder. We improved our incremental TTS by applying the Transformer architecture with the AlignAtt policy for the estimation model. The results show that our upgraded TTS module contributed to improving the system performance.
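The Local Agreement (LA) policy used in the submitted systems is simple to state: output is committed only once consecutive chunk-level hypotheses agree on it. Below is a minimal sketch of LA over two consecutive chunks (LA-2), where `translate` is a hypothetical stand-in for re-decoding with the HuBERT+mBART model, not an actual library call:

```python
from typing import Callable, List

def local_agreement_decode(
    chunks: List[bytes],
    translate: Callable[[bytes], List[str]],
) -> List[str]:
    """Stream-decode under Local Agreement (LA-2): after each new speech
    chunk, re-decode all audio received so far and commit only the longest
    common prefix shared by the current and previous hypotheses."""
    audio = b""                  # accumulated input speech
    prev_hyp: List[str] = []     # hypothesis from the previous chunk
    committed: List[str] = []    # tokens already shown to the user
    for chunk in chunks:
        audio += chunk
        hyp = translate(audio)   # hypothetical stand-in for HuBERT+mBART decoding
        # Length of the longest common prefix of the two latest hypotheses.
        agreed = 0
        for a, b in zip(prev_hyp, hyp):
            if a != b:
                break
            agreed += 1
        if agreed > len(committed):
            committed = hyp[:agreed]  # emit hyp[len(committed):agreed] incrementally
        prev_hyp = hyp
    return prev_hyp  # once the input ends, the final hypothesis is flushed in full
```

The trade-off is stability for latency: every token is delayed by at least one chunk, but committed output is never retracted, which is the behavior a simultaneous translation setting requires.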

Authors (12)
  1. Yuka Ko (5 papers)
  2. Ryo Fukuda (5 papers)
  3. Yuta Nishikawa (4 papers)
  4. Yasumasa Kano (5 papers)
  5. Tomoya Yanagita (3 papers)
  6. Kosuke Doi (4 papers)
  7. Mana Makinae (4 papers)
  8. Haotian Tan (3 papers)
  9. Makoto Sakai (2 papers)
  10. Sakriani Sakti (41 papers)
  11. Katsuhito Sudoh (35 papers)
  12. Satoshi Nakamura (94 papers)

