Using Perturbed Length-aware Positional Encoding for Non-autoregressive Neural Machine Translation (2107.13689v1)
Published 29 Jul 2021 in cs.CL
Abstract: Non-autoregressive neural machine translation (NAT) usually employs sequence-level knowledge distillation using autoregressive neural machine translation (AT) as its teacher model. However, a NAT model often outputs shorter sentences than an AT model. In this work, we propose sequence-level knowledge distillation (SKD) using perturbed length-aware positional encoding and apply it to a student model, the Levenshtein Transformer. Our method outperformed a standard Levenshtein Transformer by up to 2.5 bilingual evaluation understudy (BLEU) points on WMT14 German-to-English translation. The resulting NAT model also produced longer sentences than the baseline NAT models.
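To make the idea concrete, below is a minimal sketch of a length-aware sinusoidal positional encoding with a random length perturbation, in the spirit of the method described in the abstract. The function name `length_difference_pe`, the use of NumPy, the remaining-length indexing, and the `max_perturb` parameter are illustrative assumptions; the paper's exact formulation and where the perturbation is applied during distillation may differ.

```python
import numpy as np

def length_difference_pe(length, d_model, max_perturb=0, base=10000.0):
    """Illustrative length-aware sinusoidal positional encoding (assumed form).

    Instead of encoding the absolute position, each position encodes the
    distance to a (possibly perturbed) target length, so the decoder is
    made aware of how many tokens remain.
    """
    # Perturb the target length by a random integer in [-max_perturb, max_perturb].
    perturbed_len = length + np.random.randint(-max_perturb, max_perturb + 1)
    perturbed_len = max(perturbed_len, 1)

    pe = np.zeros((length, d_model))
    positions = perturbed_len - np.arange(length)  # remaining-length index per position
    div = np.exp(np.arange(0, d_model, 2) * (-np.log(base) / d_model))
    pe[:, 0::2] = np.sin(positions[:, None] * div)
    pe[:, 1::2] = np.cos(positions[:, None] * div)
    return pe

# Example: encoding for a 10-token target with a length perturbation of up to +/-2.
enc = length_difference_pe(length=10, d_model=8, max_perturb=2)
print(enc.shape)  # (10, 8)
```

Under this assumed setup, perturbing the length used by the positional encoding during distillation would expose the student to longer target lengths, which is consistent with the abstract's observation that the resulting NAT model produces longer sentences.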
- Yui Oka
- Katsuhito Sudoh
- Satoshi Nakamura