Document-level Neural Machine Translation with Associated Memory Network (1910.14528v2)

Published 31 Oct 2019 in cs.CL

Abstract: Standard neural machine translation (NMT) assumes each sentence is translated independently of its document-level context. Most existing document-level NMT approaches capture only a coarse sense of global document information, whereas this work exploits detailed document-level context through a memory network. The memory network's capacity to detect the parts of memory most relevant to the current sentence offers a natural way to model rich document-level context. In this work, the proposed document-aware memory network is implemented to enhance a Transformer NMT baseline. Experiments on several tasks show that the proposed method significantly improves NMT performance over strong Transformer baselines and other related studies.
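The core operation the abstract describes, detecting the parts of memory most relevant to the current sentence, can be sketched as an attention-based memory read. The snippet below is a minimal illustration, not the authors' implementation: it assumes scaled dot-product attention and NumPy arrays standing in for a memory of context-sentence representations.

```python
# Hedged sketch: attention-based memory read over document context.
# `memory` holds representations of context sentences; the current
# sentence state queries it, and the relevance-weighted read vector
# can then be fused with the decoder/encoder hidden state.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def memory_read(query, memory):
    """query: (d,) current-sentence state; memory: (n, d) context states."""
    scores = memory @ query / np.sqrt(query.shape[0])  # scaled dot-product
    weights = softmax(scores)                          # relevance per memory slot
    return weights @ memory, weights                   # (d,) read vector, (n,) weights

rng = np.random.default_rng(0)
d, n = 8, 5                                # hypothetical dimensions
query = rng.standard_normal(d)
memory = rng.standard_normal((n, d))
read, weights = memory_read(query, memory)
```

The attention weights sum to one, so the read vector is a convex combination of context-sentence representations, with the most relevant slots contributing most.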

Authors (8)
  1. Shu Jiang (18 papers)
  2. Rui Wang (996 papers)
  3. Zuchao Li (76 papers)
  4. Masao Utiyama (39 papers)
  5. Kehai Chen (59 papers)
  6. Eiichiro Sumita (31 papers)
  7. Hai Zhao (227 papers)
  8. Bao-Liang Lu (26 papers)