Sentence Boundary Augmentation For Neural Machine Translation Robustness (2010.11132v1)

Published 21 Oct 2020 in cs.CL, cs.LG, cs.SD, and eess.AS

Abstract: Neural Machine Translation (NMT) models have demonstrated strong state-of-the-art performance on translation tasks where well-formed training and evaluation data are provided, but they remain sensitive to inputs that contain errors of various types. Specifically, in long-form speech translation systems, where input transcripts come from Automatic Speech Recognition (ASR), NMT models must handle errors such as phoneme substitutions, ungrammatical structure, and incorrect sentence boundaries, all of which pose challenges to NMT robustness. Through in-depth error analysis, we show that sentence boundary segmentation has the largest impact on quality, and we develop a simple data augmentation strategy to improve segmentation robustness.
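
The augmentation strategy targets the mismatch between the clean sentence boundaries seen during NMT training and the noisy segmentation produced by an ASR pipeline. As a rough illustration of this family of techniques (not the authors' exact procedure), the sketch below generates extra training pairs from a parallel corpus by merging adjacent sentence pairs and by splitting single pairs at random word positions; the proportional target split is a crude heuristic used here only for illustration.

```python
import random

def augment_boundaries(src, tgt, n_aug=1000, seed=0):
    """Illustrative sentence-boundary augmentation for a parallel corpus.

    src, tgt: equal-length lists of in-order source/target sentences.
    Returns extra (source, target) pairs whose segment boundaries do not
    coincide with true sentence boundaries.
    """
    rng = random.Random(seed)
    out = []
    for _ in range(n_aug):
        i = rng.randrange(len(src))
        if rng.random() < 0.5 and i + 1 < len(src):
            # Merge two adjacent sentence pairs into one over-long segment,
            # mimicking a missed sentence break in ASR segmentation.
            out.append((src[i] + " " + src[i + 1], tgt[i] + " " + tgt[i + 1]))
        else:
            # Split one sentence pair at a random word position, mimicking a
            # spurious break; splitting the target at the same relative
            # position is only a rough alignment heuristic for illustration.
            s_words, t_words = src[i].split(), tgt[i].split()
            if len(s_words) < 2 or len(t_words) < 2:
                continue
            frac = rng.uniform(0.2, 0.8)
            s_cut = max(1, int(len(s_words) * frac))
            t_cut = max(1, int(len(t_words) * frac))
            out.append((" ".join(s_words[:s_cut]), " ".join(t_words[:t_cut])))
            out.append((" ".join(s_words[s_cut:]), " ".join(t_words[t_cut:])))
    return out
```

Pairs produced this way would be mixed into the original training data so that the model also sees segments whose boundaries do not line up with true sentence boundaries.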

Authors (5)
  1. Daniel Li (42 papers)
  2. Te I (3 papers)
  3. Naveen Arivazhagan (15 papers)
  4. Colin Cherry (38 papers)
  5. Dirk Padfield (7 papers)
Citations (14)
