Towards Making the Most of BERT in Neural Machine Translation (1908.05672v5)

Published 15 Aug 2019 in cs.CL and cs.LG

Abstract: GPT-2 and BERT demonstrate the effectiveness of using pre-trained language models (LMs) on various natural language processing tasks. However, LM fine-tuning often suffers from catastrophic forgetting when applied to resource-rich tasks. In this work, we introduce a concerted training framework (CTNMT) that is key to integrating pre-trained LMs into neural machine translation (NMT). The proposed CTNMT consists of three techniques: a) asymptotic distillation to ensure that the NMT model retains the pre-trained knowledge; b) a dynamic switching gate to avoid catastrophic forgetting of pre-trained knowledge; and c) a strategy to adjust the learning paces according to a scheduled policy. In machine translation experiments, CTNMT gains up to 3 BLEU on the WMT14 English-German language pair, surpassing the previous state-of-the-art pre-training-aided NMT by 1.4 BLEU. On the large WMT14 English-French task with 40 million sentence pairs, our base model still significantly improves upon the state-of-the-art Transformer-big model by more than 1 BLEU. The code and model can be downloaded from https://github.com/bytedance/neurst/tree/master/examples/ctnmt.
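
The linked repository contains the authors' NeurST implementation. As a rough illustration only, the PyTorch sketch below shows one plausible reading of two of the techniques named in the abstract: the dynamic switching gate that fuses BERT representations with NMT encoder states, and an asymptotic-distillation term that keeps the encoder close to the frozen BERT features. The gating form, tensor shapes, and the loss weight `alpha` are assumptions made for illustration, not taken from the paper or its released code.

```python
# Minimal sketch (assumed PyTorch re-interpretation, not the authors' NeurST code)
# of a dynamic switching gate and an asymptotic-distillation loss.
import torch
import torch.nn as nn


class DynamicSwitchGate(nn.Module):
    """Element-wise gate g = sigmoid(W h_bert + U h_nmt + b) blending two states."""

    def __init__(self, d_model: int):
        super().__init__()
        self.w_bert = nn.Linear(d_model, d_model, bias=False)
        self.w_nmt = nn.Linear(d_model, d_model, bias=True)

    def forward(self, h_bert: torch.Tensor, h_nmt: torch.Tensor) -> torch.Tensor:
        gate = torch.sigmoid(self.w_bert(h_bert) + self.w_nmt(h_nmt))
        # Convex combination: per dimension, the gate decides how much pre-trained
        # knowledge to keep versus the freshly computed NMT encoder state.
        return gate * h_bert + (1.0 - gate) * h_nmt


def asymptotic_distillation_loss(h_nmt: torch.Tensor,
                                 h_bert: torch.Tensor) -> torch.Tensor:
    """MSE between NMT encoder states and detached BERT states, so gradients
    do not flow back into the pre-trained model."""
    return nn.functional.mse_loss(h_nmt, h_bert.detach())


if __name__ == "__main__":
    batch, seq_len, d_model = 2, 7, 512             # toy sizes
    h_bert = torch.randn(batch, seq_len, d_model)   # stand-in for BERT output
    h_nmt = torch.randn(batch, seq_len, d_model)    # stand-in for encoder output

    fused = DynamicSwitchGate(d_model)(h_bert, h_nmt)
    alpha = 0.5  # illustrative weight for the auxiliary distillation term
    aux = alpha * asymptotic_distillation_loss(h_nmt, h_bert)
    print(fused.shape, aux.item())
```

In such a setup the auxiliary distillation loss would simply be added to the usual translation cross-entropy during training; the gate lets the model decide, token by token, how much of the pre-trained representation to retain.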

Authors (7)
  1. Jiacheng Yang
  2. Mingxuan Wang
  3. Hao Zhou
  4. Chengqi Zhao
  5. Yong Yu
  6. Weinan Zhang
  7. Lei Li
Citations (150)