Towards Robust Neural Machine Translation (1805.06130v1)
Published 16 May 2018 in cs.CL
Abstract: Small perturbations in the input can severely distort intermediate representations and thus impact the translation quality of neural machine translation (NMT) models. In this paper, we propose to improve the robustness of NMT models with adversarial stability training. The basic idea is to make both the encoder and decoder in NMT models robust against input perturbations by enabling them to behave similarly for the original input and its perturbed counterpart. Experimental results on Chinese-English, English-German and English-French translation tasks show that our approaches not only achieve significant improvements over strong NMT systems but also improve the robustness of NMT models.
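To make the stability idea concrete, below is a minimal PyTorch sketch of one way such an objective could be wired up: a translation loss on the clean input plus penalties that push the clean and perturbed encodings and output distributions toward each other. This is an illustrative assumption, not the paper's implementation; the paper trains the invariance adversarially with a discriminator, whereas this sketch substitutes simple L2 and KL penalties, and the toy model, the token-replacement noise in `perturb`, and the weights `alpha`/`beta` are all made up for the example.

```python
# Hypothetical sketch of a stability-style training objective for NMT.
# Not the paper's method: the paper uses adversarial learning; here we
# use plain invariance penalties to illustrate the underlying idea.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyNMT(nn.Module):
    """Toy encoder-decoder standing in for a real NMT model."""
    def __init__(self, vocab=1000, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.encoder = nn.GRU(dim, dim, batch_first=True)
        self.decoder = nn.GRU(dim, dim, batch_first=True)
        self.proj = nn.Linear(dim, vocab)

    def encode(self, src):
        enc, h = self.encoder(self.embed(src))
        return enc, h  # per-token encodings and final hidden state

    def decode(self, tgt_in, h):
        dec, _ = self.decoder(self.embed(tgt_in), h)
        return self.proj(dec)  # logits over the target vocabulary

def perturb(src, vocab=1000, p=0.1):
    """Randomly replace a fraction of source tokens (one simple noise type)."""
    noise = torch.randint_like(src, vocab)
    mask = torch.rand(src.shape) < p
    return torch.where(mask, noise, src)

def stability_loss(model, src, tgt_in, tgt_out, alpha=1.0, beta=1.0):
    src_noisy = perturb(src)
    enc, h = model.encode(src)
    enc_n, h_n = model.encode(src_noisy)
    logits = model.decode(tgt_in, h)
    logits_n = model.decode(tgt_in, h_n)
    # Standard translation loss on the clean input.
    l_trans = F.cross_entropy(logits.flatten(0, 1), tgt_out.flatten())
    # Encoder invariance: clean and perturbed encodings should match.
    l_enc = F.mse_loss(enc_n, enc)
    # Decoder invariance: output distributions should match (KL divergence).
    l_dec = F.kl_div(F.log_softmax(logits_n, -1),
                     F.softmax(logits, -1), reduction="batchmean")
    return l_trans + alpha * l_enc + beta * l_dec

model = TinyNMT()
src = torch.randint(0, 1000, (2, 7))   # toy source batch
tgt = torch.randint(0, 1000, (2, 6))   # toy target batch
loss = stability_loss(model, src, tgt[:, :-1], tgt[:, 1:])
loss.backward()
print(float(loss))
```

Because both terms share the encoder, minimizing the combined loss encourages representations that translate well and change little under input noise, which is the behavior the abstract describes.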
Authors:
- Yong Cheng
- Zhaopeng Tu
- Fandong Meng
- Junjie Zhai
- Yang Liu