Coverage Embedding Models for Neural Machine Translation (1605.03148v2)

Published 10 May 2016 in cs.CL

Abstract: In this paper, we enhance attention-based neural machine translation (NMT) by adding explicit coverage embedding models to alleviate the issues of repeated and dropped translations in NMT. For each source word, our model starts with a full coverage embedding vector to track the coverage status, and keeps updating it with neural networks as the translation proceeds. Experiments on a large-scale Chinese-to-English task show that our enhanced model significantly improves translation quality on various test sets over a strong large-vocabulary NMT system.
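The mechanism the abstract describes — a per-source-word coverage vector, initialized to a "full coverage" state and updated by a neural network at each decoding step — can be sketched roughly as below. This is a minimal illustration, not the authors' implementation: the class name `CoverageEmbedding`, the dimensions, the GRU-cell update, and the choice of update inputs (the attention weight and the previous target-word embedding) are assumptions made for the sketch.

```python
# Sketch of coverage embeddings for NMT (illustrative, not the paper's code).
# Each source position i carries a coverage vector c_{t,i}, initialized to a
# learned "full coverage" vector and updated by a GRU cell at every decoding
# step t, conditioned on the attention weight it received and the previously
# emitted target word. All names and sizes are hypothetical.
import torch
import torch.nn as nn

class CoverageEmbedding(nn.Module):
    def __init__(self, cov_dim=100, tgt_embed_dim=100):
        super().__init__()
        # learned "full coverage" start vector, shared across source words
        self.full = nn.Parameter(torch.ones(cov_dim))
        # GRU update; its input is [attention weight ; prev. target embedding]
        self.update = nn.GRUCell(1 + tgt_embed_dim, cov_dim)

    def init_coverage(self, batch, src_len):
        # every source position starts from the full-coverage vector
        return self.full.expand(batch, src_len, -1).contiguous()

    def step(self, coverage, attn, y_prev_embed):
        # coverage:     (batch, src_len, cov_dim)  current coverage vectors
        # attn:         (batch, src_len)           attention weights at step t
        # y_prev_embed: (batch, tgt_embed_dim)     embedding of previous target word
        b, n, d = coverage.shape
        inp = torch.cat(
            [attn.unsqueeze(-1),                          # (b, n, 1)
             y_prev_embed.unsqueeze(1).expand(b, n, -1)], # (b, n, tgt_embed_dim)
            dim=-1,
        ).reshape(b * n, -1)
        new_cov = self.update(inp, coverage.reshape(b * n, d))
        return new_cov.reshape(b, n, d)
```

In a full decoder, these coverage vectors would be fed back into the attention scores at the next step, so that well-covered source words attract less attention (mitigating repetition) while uncovered words keep attracting it (mitigating dropped translations) — which is the motivation the abstract states.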

Authors (4)
  1. Haitao Mi (56 papers)
  2. Baskaran Sankaran (5 papers)
  3. Zhiguo Wang (100 papers)
  4. Abe Ittycheriah (9 papers)
Citations (28)
