Multi-layer Representation Fusion for Neural Machine Translation (2002.06714v1)

Published 16 Feb 2020 in cs.CL

Abstract: Neural machine translation systems require a number of stacked layers to build deep models, but the prediction depends only on the sentence representation of the top-most layer, with no access to low-level representations. This makes the model harder to train and poses a risk of information loss for prediction. In this paper, we propose a multi-layer representation fusion (MLRF) approach to fusing stacked layers. In particular, we design three fusion functions to learn a better representation from the stack. Experimental results show that our approach yields improvements of 0.92 and 0.56 BLEU points over the strong Transformer baseline on the IWSLT German-English and NIST Chinese-English MT tasks, respectively. The result is a new state of the art in German-English translation.
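
The core pattern, feeding the prediction a fusion of all stacked layers rather than only the top-most one, can be sketched briefly. The abstract does not describe the paper's three fusion functions, so the snippet below substitutes a hypothetical learned softmax-weighted sum over layer outputs in PyTorch; it illustrates the fusion idea under that assumption, not the authors' actual method.

```python
import torch
import torch.nn as nn


class WeightedLayerFusion(nn.Module):
    """Fuse hidden states from every stacked layer into one representation.

    Hypothetical stand-in: the paper designs three fusion functions, but
    the abstract does not specify them, so this sketch uses a simple
    learned softmax-weighted sum over layers.
    """

    def __init__(self, num_layers: int):
        super().__init__()
        # One learnable scalar per layer; softmax-normalized at fusion time.
        self.layer_weights = nn.Parameter(torch.zeros(num_layers))

    def forward(self, layer_states):
        # layer_states: list of [batch, seq_len, d_model] tensors, one per
        # stacked layer, ordered bottom to top.
        stacked = torch.stack(layer_states, dim=0)           # [L, B, T, D]
        weights = torch.softmax(self.layer_weights, dim=0)   # [L]
        # Weighted sum across the layer axis yields the fused representation.
        return torch.einsum("l,lbtd->btd", weights, stacked)


# Usage: fuse a 6-layer encoder stack instead of taking only the top
# layer's output before prediction.
fusion = WeightedLayerFusion(num_layers=6)
states = [torch.randn(2, 10, 512) for _ in range(6)]
fused = fusion(states)  # shape [2, 10, 512]
```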

Authors (6)
  1. Qiang Wang (271 papers)
  2. Fuxue Li (1 paper)
  3. Tong Xiao (119 papers)
  4. Yanyang Li (22 papers)
  5. Yinqiao Li (7 papers)
  6. Jingbo Zhu (79 papers)
Citations (50)
