Non-Autoregressive Neural Machine Translation with Enhanced Decoder Input (1812.09664v1)

Published 23 Dec 2018 in cs.CL and cs.LG

Abstract: Non-autoregressive translation (NAT) models, which remove the dependence on previous target tokens from the decoder inputs, achieve a significant inference speedup but at the cost of inferior accuracy compared to autoregressive translation (AT) models. Previous work shows that the quality of the decoder inputs is important and largely impacts model accuracy. In this paper, we propose two methods to enhance the decoder inputs and thereby improve NAT models. The first directly leverages a phrase table generated by conventional SMT approaches to translate source tokens into target tokens, which are then fed to the decoder as inputs. The second transforms source-side word embeddings into target-side word embeddings through sentence-level alignment and word-level adversarial learning, and then feeds the transformed embeddings to the decoder as inputs. Experimental results show our method outperforms the NAT baseline (Gu et al., 2017) by 5.11 BLEU on the WMT14 English-German task and 4.72 BLEU on the WMT16 English-Romanian task.
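
As a concrete illustration of the two methods described above, the sketch below shows how enhanced decoder inputs could be produced. This is a minimal sketch, not the authors' implementation; the names `phrase_table`, `tgt_embedding`, `W`, and the `<unk>` fallback are assumptions for illustration, and the adversarial training of the embedding transform is omitted.

```python
# Minimal sketch (assumed names, not the paper's code) of producing
# enhanced decoder inputs for a NAT model.

import torch
import torch.nn as nn

def phrase_table_inputs(src_tokens, phrase_table, tgt_embedding, unk_id=3):
    """Method 1 (sketch): translate each source token id to a target
    token id via an SMT-style phrase table, then embed the result.

    src_tokens: LongTensor [batch, src_len] of source vocabulary ids
    phrase_table: dict mapping a source id to its most probable target id
    tgt_embedding: nn.Embedding over the target vocabulary
    """
    mapped = src_tokens.clone()
    for i, row in enumerate(src_tokens.tolist()):
        for j, tok in enumerate(row):
            # Fall back to <unk> when the phrase table has no entry.
            mapped[i, j] = phrase_table.get(tok, unk_id)
    return tgt_embedding(mapped)

def transformed_embedding_inputs(src_embeds, W):
    """Method 2 (sketch): map source-side word embeddings into the
    target embedding space with a linear transform W. In the paper, W
    is trained via sentence-level alignment and word-level adversarial
    learning; that training procedure is not shown here.

    src_embeds: FloatTensor [batch, src_len, dim]
    W: FloatTensor [dim, dim]
    """
    return src_embeds @ W
```

In both cases the enhanced inputs would replace the copied source embeddings that the NAT baseline feeds to its decoder.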

Authors (6)
  1. Junliang Guo (39 papers)
  2. Xu Tan (164 papers)
  3. Di He (108 papers)
  4. Tao Qin (201 papers)
  5. Linli Xu (33 papers)
  6. Tie-Yan Liu (242 papers)
Citations (124)
