Assessing Evaluation Metrics for Speech-to-Speech Translation (2110.13877v1)

Published 26 Oct 2021 in cs.CL, cs.SD, and eess.AS

Abstract: Speech-to-speech translation combines machine translation with speech synthesis, introducing evaluation challenges not present in either task alone. How to automatically evaluate speech-to-speech translation is an open question which has not previously been explored. Translating to speech rather than to text is often motivated by unwritten languages or languages without standardized orthographies. However, we show that the previously used automatic metric for this task is best equipped for standardized high-resource languages only. In this work, we first evaluate current metrics for speech-to-speech translation, and second assess how translation to dialectal variants rather than to standardized languages impacts various evaluation methods.
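The previously used automatic metric referred to in the abstract is an ASR-based one: the synthesized target speech is transcribed with a speech recognizer and the transcript is scored against a text reference with BLEU (and related metrics such as chrF). Below is a minimal sketch of that pipeline, assuming a Whisper ASR model and the sacrebleu library; the model name, file paths, and reference strings are illustrative, not taken from the paper.

```python
# Sketch of an ASR-based speech-to-speech translation metric (ASR-BLEU style).
# Assumptions: openai-whisper and sacrebleu are installed; the ASR model name,
# audio paths, and reference texts below are placeholders for illustration.
import whisper
import sacrebleu

def asr_based_scores(hyp_audio_paths, reference_texts, asr_model_name="base"):
    """Transcribe synthesized target speech with ASR, then score the
    transcripts against text references with corpus BLEU and chrF."""
    asr = whisper.load_model(asr_model_name)
    transcripts = [asr.transcribe(path)["text"].strip() for path in hyp_audio_paths]
    bleu = sacrebleu.corpus_bleu(transcripts, [reference_texts])
    chrf = sacrebleu.corpus_chrf(transcripts, [reference_texts])
    return bleu.score, chrf.score

# Hypothetical usage:
# bleu, chrf = asr_based_scores(["system_output_001.wav"], ["the reference translation"])
# print(f"ASR-BLEU: {bleu:.1f}  ASR-chrF: {chrf:.1f}")
```

As the abstract notes, this chain presupposes a reliable ASR system and a standardized orthography for the target language, which is why it is best suited to high-resource standardized languages and becomes problematic for dialectal or unwritten targets.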

Authors (3)
  1. Elizabeth Salesky (27 papers)
  2. Julian Mäder (2 papers)
  3. Severin Klinger (1 paper)
Citations (13)