Shallow-to-Deep Training for Neural Machine Translation (2010.03737v1)

Published 8 Oct 2020 in cs.CL

Abstract: Deep encoders have been proven effective in improving neural machine translation (NMT) systems, but training an extremely deep encoder is time-consuming. Moreover, why deep models help NMT remains an open question. In this paper, we investigate the behavior of a well-tuned deep Transformer system. We find that stacking layers improves the representation ability of NMT models and that adjacent layers perform similarly. This inspires us to develop a shallow-to-deep training method that learns deep models by stacking shallow models. In this way, we successfully train a Transformer system with a 54-layer encoder. Experimental results on the WMT'16 English-German and WMT'14 English-French translation tasks show that it is $1.4\times$ faster than training from scratch, and achieves BLEU scores of $30.33$ and $43.29$ on the two tasks, respectively. The code is publicly available at https://github.com/libeineu/SDT-Training/.
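
The central idea, growing a deep encoder out of an already trained shallow one, can be sketched roughly as follows. This is a minimal illustration rather than the authors' released implementation: the layer counts, the copy-the-top-layer initialization, and the `grow_encoder` helper are assumptions made for clarity; see the linked repository for the official code.

```python
# Minimal sketch (not the official SDT-Training code) of shallow-to-deep training:
# train a shallow Transformer encoder, then grow it by stacking new layers that
# are warm-started from the current top layer, and continue training.
import copy
import torch.nn as nn

def grow_encoder(encoder_layers: nn.ModuleList, num_new_layers: int) -> nn.ModuleList:
    """Stack extra layers on top, initializing each from the current top layer.

    The paper observes that adjacent layers behave similarly, which motivates
    warm-starting the newly added depth instead of training it from scratch.
    """
    grown = list(encoder_layers)
    top = grown[-1]
    for _ in range(num_new_layers):
        grown.append(copy.deepcopy(top))  # new layer copies the top layer's weights
    return nn.ModuleList(grown)

# Usage: start shallow (e.g. 6 layers), train for a while, then grow the stack.
base = nn.ModuleList(
    [nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True)
     for _ in range(6)]
)
# ... train `base` for some steps ...
deeper = grow_encoder(base, num_new_layers=6)  # now 12 layers; repeat toward 54
```

Because each growth step resumes from a partially trained model instead of reinitializing, the overall schedule can reach the final depth with less total compute than training the deep encoder from scratch, which is where the reported $1.4\times$ speedup comes from.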

Authors (8)
  1. Bei Li (51 papers)
  2. Ziyang Wang (59 papers)
  3. Hui Liu (481 papers)
  4. Yufan Jiang (17 papers)
  5. Quan Du (8 papers)
  6. Tong Xiao (119 papers)
  7. Huizhen Wang (3 papers)
  8. Jingbo Zhu (79 papers)
Citations (48)