Average Token Delay: A Latency Metric for Simultaneous Translation (2211.13173v2)

Published 22 Nov 2022 in cs.CL and cs.SD

Abstract: Simultaneous translation is a task in which translation begins before the speaker has finished speaking. Its evaluation must consider the latency of the translation in addition to its quality. The latency should be as small as possible so that users can follow what the speaker says with minimal delay. Existing latency metrics focus on when a translation starts but do not adequately consider when it ends, so they do not penalize the latency caused by long translation outputs, which in fact delays users' comprehension. In this work, we propose a novel latency evaluation metric called Average Token Delay (ATD) that focuses on the end timings of partial translations in simultaneous translation. We discuss the advantages of ATD using simulated examples and also investigate the differences between ATD and Average Lagging through simultaneous translation experiments.
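
To make the contrast between start-timing and end-timing latency concrete, here is a minimal Python sketch. The Average Lagging (AL) function follows the standard definition from Ma et al. (2019); the second function is only an illustrative end-timing-based delay in the spirit of ATD, not the paper's exact formula. The helper names (read_counts, src_end_times, tgt_end_times, alignment) are assumptions of this sketch.

```python
def average_lagging(read_counts, src_len, tgt_len):
    """AL = (1/tau) * sum_{t=1}^{tau} [ g(t) - (t-1)/gamma ],
    where g(t) is the number of source tokens read before emitting
    target token t, gamma = tgt_len / src_len, and tau is the index
    of the first target token emitted after the full source is read.
    """
    gamma = tgt_len / src_len
    # tau: first target position whose read count covers the whole source
    tau = next((t for t, g in enumerate(read_counts, start=1) if g >= src_len),
               len(read_counts))
    return sum(read_counts[t - 1] - (t - 1) / gamma for t in range(1, tau + 1)) / tau


def average_end_timing_delay(src_end_times, tgt_end_times, alignment):
    """Hypothetical end-timing delay in the spirit of ATD: average the gap
    between when each target token is finished and when its aligned source
    token was finished. The alignment mapping here is an assumption of the
    sketch, not the paper's algorithm.
    """
    delays = [tgt_end_times[t] - src_end_times[alignment[t]]
              for t in range(len(tgt_end_times))]
    return sum(delays) / len(delays)


if __name__ == "__main__":
    # A wait-3 style policy on a 6-token source producing 6 target tokens:
    # the model reads 3 tokens before its first output, then one per step.
    read_counts = [3, 4, 5, 6, 6, 6]
    print("AL:", average_lagging(read_counts, src_len=6, tgt_len=6))  # -> 3.0
```

The key point the sketch tries to mirror is that AL is driven only by how many source tokens are read before each output step, whereas an end-timing-based metric also grows when the output itself is long, since later target tokens finish later.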

Authors (3)
  1. Yasumasa Kano (5 papers)
  2. Katsuhito Sudoh (35 papers)
  3. Satoshi Nakamura (94 papers)
Citations (10)
