Tagged End-to-End Simultaneous Speech Translation Training using Simultaneous Interpretation Data (2306.08582v1)

Published 14 Jun 2023 in cs.CL, cs.SD, and eess.AS

Abstract: Simultaneous speech translation (SimulST) translates partial speech inputs incrementally. Although a monotonic correspondence between input and output is preferable for lower latency, this does not hold for distant language pairs such as English and Japanese. A promising approach to this problem is to mimic simultaneous interpretation (SI) by training a SimulST model on SI data. However, the amount of such SI data is limited, so it should be used together with ordinary bilingual data whose translations are produced offline. In this paper, we propose an effective way to train a SimulST model on mixed SI and offline data. The proposed method trains a single model on the mixed data with style tags that tell the model to generate SI- or offline-style outputs. Experimental results show improvements in BLEURT across different latency ranges, and our analyses reveal that the proposed model generates SI-style outputs more often than the baseline.
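The core idea in the abstract is to merge the small SI corpus with the larger offline corpus and mark each example with a style tag so a single model learns both output styles. The following Python sketch illustrates one plausible way such tagged training data could be prepared; the tag strings `<si>` and `<off>`, the data layout, and the example sentences are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (not the authors' released code) of mixed-data preparation with
# style tags: each target sentence is prefixed with a tag telling the model
# whether to produce SI-style or offline-style output.

from dataclasses import dataclass
from typing import List


@dataclass
class Example:
    source: str   # source-language transcript (speech features in the real system)
    target: str   # target-language text


def tag_corpus(examples: List[Example], tag: str) -> List[Example]:
    """Prepend a style tag to every target sentence."""
    return [Example(ex.source, f"{tag} {ex.target}") for ex in examples]


def build_mixed_training_data(si_data: List[Example],
                              offline_data: List[Example]) -> List[Example]:
    """Merge SI and offline corpora into one tagged training set for a single model."""
    return tag_corpus(si_data, "<si>") + tag_corpus(offline_data, "<off>")


if __name__ == "__main__":
    si = [Example("I think that is a good idea .", "それは 良い 考え だ と 思います")]
    offline = [Example("I think that is a good idea .", "それは 良い 考え だ と 私 は 思います")]
    for ex in build_mixed_training_data(si, offline):
        print(ex.target)
    # At inference time, the <si> tag would be supplied as the first decoder
    # token to request SI-style (low-latency) output from the shared model.
```

In this setup, the tag acts as a lightweight control signal: the same encoder-decoder parameters serve both styles, and the choice of tag at decoding time selects which style the model imitates.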

Authors (6)
  1. Yuka Ko (5 papers)
  2. Ryo Fukuda (5 papers)
  3. Yuta Nishikawa (4 papers)
  4. Yasumasa Kano (5 papers)
  5. Katsuhito Sudoh (35 papers)
  6. Satoshi Nakamura (94 papers)
Citations (5)