Neural Machine Translation with Key-Value Memory-Augmented Attention (1806.11249v1)

Published 29 Jun 2018 in cs.CL

Abstract: Although attention-based Neural Machine Translation (NMT) has achieved remarkable progress in recent years, it still suffers from the issues of repeating and dropping translations. To alleviate these issues, we propose a novel key-value memory-augmented attention model for NMT, called KVMEMATT. Specifically, we maintain a timely updated key-memory to keep track of attention history and a fixed value-memory to store the representation of the source sentence throughout the whole translation process. Via nontrivial transformations and iterative interactions between the two memories, the decoder focuses on more appropriate source word(s) for predicting the next target word at each decoding step, and can therefore improve the adequacy of translations. Experimental results on Chinese=>English and WMT17 German<=>English translation tasks demonstrate the superiority of the proposed model.

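The abstract describes the mechanism (a fixed value-memory read via a key-memory that is rewritten each step to record attention history) without giving equations. Below is a minimal PyTorch sketch of one such attention-and-update step, assuming dot-product attention and a GRU-style gated write to the key memory; the layer names, gating form, and dimensions are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class KVMemAttention(nn.Module):
    """Sketch of key-value memory-augmented attention.

    The value memory (fixed source representations) is read with attention
    weights computed against a key memory; the key memory is then updated
    with the decoder state so that attention history carries across
    decoding steps. The gating form here is an assumption for illustration.
    """

    def __init__(self, dim):
        super().__init__()
        self.query_proj = nn.Linear(dim, dim)
        # GRU-style gated update of the key memory with the decoder state.
        self.update_gate = nn.Linear(2 * dim, dim)
        self.candidate = nn.Linear(2 * dim, dim)

    def forward(self, dec_state, key_mem, value_mem):
        # dec_state: (B, dim)        current decoder hidden state
        # key_mem:   (B, src_len, dim) rewritten at every decoding step
        # value_mem: (B, src_len, dim) fixed for the whole sentence
        query = self.query_proj(dec_state).unsqueeze(2)      # (B, dim, 1)
        scores = torch.bmm(key_mem, query).squeeze(2)        # (B, src_len)
        attn = torch.softmax(scores, dim=1)                  # attention weights
        context = torch.bmm(attn.unsqueeze(1), value_mem).squeeze(1)  # (B, dim)

        # Write attention history back into the key memory: positions that
        # received more attention are updated more strongly, marking them
        # as "covered" for later steps.
        state = dec_state.unsqueeze(1).expand_as(key_mem)    # (B, src_len, dim)
        inp = torch.cat([key_mem, state], dim=-1)
        z = torch.sigmoid(self.update_gate(inp)) * attn.unsqueeze(-1)
        cand = torch.tanh(self.candidate(inp))
        new_key_mem = (1 - z) * key_mem + z * cand
        return context, attn, new_key_mem
```

At each decoding step the decoder would call this module and feed `new_key_mem` back in at the next step, while `value_mem` stays fixed; the "iterative interactions" in the abstract correspond to repeating the read-then-update cycle within a step, which this sketch performs once.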
Authors (7)
  1. Fandong Meng (174 papers)
  2. Zhaopeng Tu (135 papers)
  3. Yong Cheng (58 papers)
  4. Haiyang Wu (11 papers)
  5. Junjie Zhai (7 papers)
  6. Yuekui Yang (10 papers)
  7. Di Wang (407 papers)
Citations (20)