
GTrans: Grouping and Fusing Transformer Layers for Neural Machine Translation (2207.14467v2)

Published 29 Jul 2022 in cs.CL and cs.LG

Abstract: The Transformer architecture, built by stacking encoder and decoder layers, has driven significant progress in neural machine translation. However, the vanilla Transformer mainly exploits the top-layer representation, assuming the lower layers provide trivial or redundant information, and thus ignores bottom-layer features that are potentially valuable. In this work, we propose the Group-Transformer model (GTrans), which flexibly divides the multi-layer representations of both encoder and decoder into groups and then fuses these group features to generate target words. To corroborate the effectiveness of the proposed method, extensive experiments and analyses are conducted on three bilingual translation benchmarks and two multilingual translation tasks, including the IWSLT-14, IWSLT-17, LDC, WMT-14, and OPUS-100 benchmarks. Experimental and analytical results demonstrate that our model consistently outperforms its Transformer counterparts. Furthermore, it scales successfully to 60 encoder layers and 36 decoder layers.
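To make the grouping-and-fusing idea concrete, the sketch below shows one plausible way to split per-layer hidden states into contiguous groups and combine them with learned weights. This is an illustration under assumptions, not the paper's exact formulation: the class name, the mean-pooling within each group, and the softmax-weighted fusion are all hypothetical choices made for clarity.

```python
import torch
import torch.nn as nn


class GroupedLayerFusion(nn.Module):
    """Illustrative sketch: collect per-layer hidden states, split them into
    contiguous groups, pool each group, and fuse the group features with
    learned weights. The pooling and fusion rules are assumptions, not the
    paper's exact method."""

    def __init__(self, d_model: int, num_layers: int, num_groups: int):
        super().__init__()
        assert num_layers % num_groups == 0, "layers must split evenly into groups"
        self.group_size = num_layers // num_groups
        # One learnable scalar weight per group, normalized with softmax.
        self.group_weights = nn.Parameter(torch.zeros(num_groups))
        self.norm = nn.LayerNorm(d_model)

    def forward(self, layer_states: list[torch.Tensor]) -> torch.Tensor:
        # layer_states: one [batch, seq_len, d_model] tensor per layer.
        stacked = torch.stack(layer_states, dim=0)            # [L, B, T, D]
        groups = stacked.split(self.group_size, dim=0)        # num_groups chunks
        # Mean-pool the layers inside each group, then stack group features.
        group_feats = torch.stack([g.mean(dim=0) for g in groups], dim=0)  # [G, B, T, D]
        weights = torch.softmax(self.group_weights, dim=0).view(-1, 1, 1, 1)
        fused = (weights * group_feats).sum(dim=0)             # [B, T, D]
        return self.norm(fused)


if __name__ == "__main__":
    torch.manual_seed(0)
    fusion = GroupedLayerFusion(d_model=512, num_layers=6, num_groups=3)
    states = [torch.randn(2, 10, 512) for _ in range(6)]
    print(fusion(states).shape)  # torch.Size([2, 10, 512])
```

In this reading, the fused representation would replace the usual top-layer output fed to the decoder (or to the output projection on the decoder side), letting lower-layer features contribute directly to target-word generation.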

Authors (8)
  1. Jian Yang (505 papers)
  2. Yuwei Yin (21 papers)
  3. Liqun Yang (18 papers)
  4. Shuming Ma (83 papers)
  5. Haoyang Huang (27 papers)
  6. Dongdong Zhang (79 papers)
  7. Furu Wei (291 papers)
  8. Zhoujun Li (122 papers)
Citations (13)
