Assessing the Tolerance of Neural Machine Translation Systems Against Speech Recognition Errors (1904.10997v1)

Published 24 Apr 2019 in cs.CL

Abstract: Machine translation systems are conventionally trained on textual resources that do not model phenomena that occur in spoken language. While the evaluation of neural machine translation systems on textual inputs is actively researched in the literature, little has been discovered about the complexities of translating spoken language data with neural models. We introduce and motivate interesting problems one faces when considering the translation of automatic speech recognition (ASR) outputs on neural machine translation (NMT) systems. We test the robustness of sentence encoding approaches for NMT encoder-decoder modeling, focusing on word-based over byte-pair encoding. We compare the translation of utterances containing ASR errors in state-of-the-art NMT encoder-decoder systems against a strong phrase-based machine translation baseline in order to better understand which phenomena present in ASR outputs are better represented under the NMT framework than approaches that represent translation as a linear model.
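
The contrast between word-based and byte-pair (subword) encodings is central to the robustness question the abstract raises. The following minimal Python sketch is a hypothetical illustration, not the authors' pipeline: it assumes toy vocabularies and a greedy longest-match segmenter to show why an ASR misrecognition that falls outside a word-level vocabulary collapses to an unknown token, while a BPE-style segmentation can still decompose it into known subword units.

```python
# Hypothetical sketch (not the authors' code): word-level lookup vs. a
# BPE-style subword segmentation applied to an ASR hypothesis.
# The vocabularies and the greedy longest-match segmenter are toy assumptions.

WORD_VOCAB = {"the", "weather", "is", "nice", "today"}
SUBWORD_VOCAB = {"the", "wea", "we", "ther", "is", "nice", "to", "day"}

def word_level(tokens):
    """Word-level lookup: anything outside the vocabulary collapses to <unk>."""
    return [t if t in WORD_VOCAB else "<unk>" for t in tokens]

def bpe_like(token, vocab=SUBWORD_VOCAB):
    """Greedy longest-match segmentation into known subword units."""
    pieces, i = [], 0
    while i < len(token):
        for j in range(len(token), i, -1):
            if token[i:j] in vocab:
                pieces.append(token[i:j])
                i = j
                break
        else:
            # No known subword starts here: back off to a single character.
            pieces.append(token[i])
            i += 1
    return pieces

asr_hyp = "the wether is nice today".split()  # "weather" misrecognized as "wether"

print(word_level(asr_hyp))
# ['the', '<unk>', 'is', 'nice', 'today']  -> the ASR error becomes an unknown word

print([bpe_like(t) for t in asr_hyp])
# [['the'], ['we', 'ther'], ['is'], ['nice'], ['to', 'day']]  -> still segmentable
```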

Authors (4)
  1. Nicholas Ruiz (4 papers)
  2. Mattia Antonino Di Gangi (11 papers)
  3. Nicola Bertoldi (2 papers)
  4. Marcello Federico (38 papers)
Citations (23)
