Training Deeper Neural Machine Translation Models with Transparent Attention (1808.07561v2)
Published 22 Aug 2018 in cs.CL, cs.AI, and cs.LG
Abstract: While current state-of-the-art NMT models, such as RNN seq2seq and Transformers, possess a large number of parameters, they are still shallow in comparison to convolutional models used for both text and vision applications. In this work we attempt to train significantly (2-3x) deeper Transformer and Bi-RNN encoders for machine translation. We propose a simple modification to the attention mechanism that eases the optimization of deeper models, and results in consistent gains of 0.7-1.1 BLEU on the benchmark WMT'14 English-German and WMT'15 Czech-English tasks for both architectures.
- Ankur Bapna (53 papers)
- Mia Xu Chen (8 papers)
- Orhan Firat (80 papers)
- Yuan Cao (201 papers)
- Yonghui Wu (115 papers)
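The "transparent attention" modification described in the abstract replaces attention over only the top encoder layer with attention over a softmax-weighted combination of all encoder layer outputs, so gradients flow directly to every layer of a deep encoder. A minimal NumPy sketch of that layer-combination step (function and variable names are illustrative, not from the paper's code):

```python
import numpy as np

def transparent_encoder_output(layer_outputs, layer_logits):
    """Combine all encoder layer outputs into one attention source.

    layer_outputs: list of L arrays, each (seq_len, d_model) -- the
        activations of every encoder layer, not just the top one.
    layer_logits: learned scalar per layer (length L); softmax over
        these gives the mixing weights. In the paper a separate
        weight vector is learned per decoder layer; one vector is
        shown here for brevity.
    """
    logits = np.asarray(layer_logits, dtype=float)
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()                       # softmax over encoder layers
    stacked = np.stack(layer_outputs)              # (L, seq_len, d_model)
    # Weighted sum over the layer axis -> (seq_len, d_model)
    return np.tensordot(weights, stacked, axes=1)
```

With uniform logits this reduces to a plain average of the layers; during training the logits are learned jointly with the rest of the model, letting the decoder emphasize whichever encoder depths are most useful.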