Addressing the Vulnerability of NMT in Input Perturbations (2104.09810v1)

Published 20 Apr 2021 in cs.CL

Abstract: Neural Machine Translation (NMT) has achieved significant breakthroughs in performance but is known to be vulnerable to input perturbations. Because real input noise is difficult to predict during training, robustness is a major concern for system deployment. In this paper, we improve the robustness of NMT models by reducing the effect of noisy words through a Context-Enhanced Reconstruction (CER) approach. CER trains the model to resist noise in two steps: (1) a perturbation step that breaks the naturalness of the input sequence with made-up words; (2) a reconstruction step that prevents noise propagation by generating better and more robust contextual representations. Experimental results on Chinese-English (ZH-EN) and French-English (FR-EN) translation tasks demonstrate robustness improvements on both news and social media text. Further fine-tuning experiments on social media text show that our approach converges at a higher point and provides better adaptation.
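The two steps can be pictured with a minimal sketch. Everything below is an illustrative assumption, not the paper's actual method: the 15% perturbation rate, the random-character made-up words, and the cosine-similarity reconstruction objective are all hypothetical choices made only to show the shape of the idea.

```python
# Hypothetical sketch of the two CER steps named in the abstract.
# All specifics (perturb rate, made-up-word generator, loss form)
# are assumptions for illustration, not taken from the paper.
import random
import string

import torch
import torch.nn.functional as F


def perturb(tokens, rate=0.15, rng=random):
    """Step 1: break the naturalness of the input sequence by
    replacing a fraction of tokens with made-up words (random
    character strings here)."""
    noisy = list(tokens)
    for i in range(len(noisy)):
        if rng.random() < rate:
            length = rng.randint(3, 8)
            noisy[i] = "".join(rng.choices(string.ascii_lowercase, k=length))
    return noisy


def reconstruction_loss(clean_states, noisy_states):
    """Step 2 (one plausible formulation): pull the encoder's
    contextual representation of the perturbed sentence toward
    that of the clean sentence, so a noisy word cannot propagate
    through the surrounding context."""
    # clean_states, noisy_states: (seq_len, hidden) encoder outputs
    return (1.0 - F.cosine_similarity(clean_states, noisy_states, dim=-1)).mean()


if __name__ == "__main__":
    random.seed(0)
    sent = "the quick brown fox jumps over the lazy dog".split()
    print("clean:", " ".join(sent))
    print("noisy:", " ".join(perturb(sent)))

    # With a real NMT encoder, the training loss would combine the
    # usual translation cross-entropy with this reconstruction term;
    # random tensors stand in for encoder states here.
    h_clean = torch.randn(len(sent), 512)
    h_noisy = h_clean + 0.1 * torch.randn_like(h_clean)
    print("reconstruction loss:", reconstruction_loss(h_clean, h_noisy).item())
```

In this reading, the perturbation step manufactures the unpredictable noise the abstract says is hard to anticipate at training time, and the reconstruction term supplies the defense by making the contextual representation insensitive to it.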

Authors (5)
  1. Weiwen Xu (19 papers)
  2. Ai Ti Aw (18 papers)
  3. Yang Ding (65 papers)
  4. Kui Wu (57 papers)
  5. Shafiq Joty (187 papers)
Citations (10)
