
Improving the Robustness of Speech Translation (1811.00728v1)

Published 2 Nov 2018 in cs.CL

Abstract: Although neural machine translation (NMT) has achieved impressive progress recently, it is usually trained on clean parallel data and hence cannot work well when the input sentence is produced by an automatic speech recognition (ASR) system, due to the numerous errors in the source. To address this problem, we propose a simple but effective method to improve the robustness of NMT for speech translation. We simulate the noise present in realistic ASR output and inject it into the clean parallel data, so that NMT sees similar word distributions during training and testing. In addition, we incorporate Chinese Pinyin features, which are easy to obtain in speech translation, to further improve translation performance. Experimental results show that our method performs more stably and outperforms the baseline by an average of 3.12 BLEU on multiple noisy test sets, while also achieving a generalization improvement on the WMT'17 Chinese-English test set.
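The core idea in the abstract — injecting ASR-like noise into clean source sentences so the NMT model trains on realistic input — can be sketched as below. This is an illustrative approximation, not the authors' exact procedure: the function name, the homophone table, and the substitution/deletion probabilities are all assumptions chosen for the example (homophone substitution being a dominant ASR error type in Chinese).

```python
import random


def inject_asr_noise(tokens, homophones, sub_prob=0.1, del_prob=0.05, rng=None):
    """Simulate ASR-style errors on a clean source sentence.

    With probability `del_prob` a token is dropped (deletion error);
    with probability `sub_prob` it is replaced by a same-sounding
    token from `homophones` (substitution error); otherwise it is
    kept unchanged. All values here are illustrative assumptions.
    """
    rng = rng or random.Random(0)
    noisy = []
    for tok in tokens:
        r = rng.random()
        if r < del_prob:
            continue  # simulate a deletion error
        if r < del_prob + sub_prob and tok in homophones:
            noisy.append(rng.choice(homophones[tok]))  # substitution error
        else:
            noisy.append(tok)
    return noisy
```

The noisy sentences would then be paired with the original target translations and mixed into the training data, so that the model's training-time input distribution matches what the ASR system produces at test time.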

Authors (6)
  1. Xiang Li (1003 papers)
  2. Haiyang Xue (1 paper)
  3. Wei Chen (1290 papers)
  4. Yang Liu (2253 papers)
  5. Yang Feng (230 papers)
  6. Qun Liu (230 papers)
Citations (17)
