Subformer: Exploring Weight Sharing for Parameter Efficiency in Generative Transformers (2101.00234v3)

Published 1 Jan 2021 in cs.CL and cs.LG

Abstract: Transformers have shown improved performance when compared to previous architectures for sequence processing such as RNNs. Despite their sizeable performance gains, as recently suggested, the model is computationally expensive to train and has a high parameter budget. In light of this, we explore parameter-sharing methods in Transformers with a specific focus on generative models. We perform an analysis of different parameter sharing/reduction methods and develop the Subformer. Our model combines sandwich-style parameter sharing, which overcomes naive cross-layer parameter sharing in generative models, and self-attentive embedding factorization (SAFE). Experiments on machine translation, abstractive summarization and language modeling show that the Subformer can outperform the Transformer even when using significantly fewer parameters.
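To make the two ideas named in the abstract concrete, below is a minimal PyTorch sketch of (a) sandwich-style parameter sharing, where only the middle layers reuse one set of weights while the first and last layers keep their own, and (b) a self-attentive embedding factorization in the spirit of SAFE, where tokens are embedded in a small dimension and projected up through a small self-attention block. This is an illustration based on the abstract's description, not the authors' implementation; the class names, layer counts, and dimensions are assumptions.

```python
# Hedged sketch of sandwich-style sharing and SAFE-style factorized embeddings.
# Names (SAFEEmbedding, SandwichEncoder) and hyperparameters are illustrative.
import torch
import torch.nn as nn

class SAFEEmbedding(nn.Module):
    """Embed tokens in a reduced dimension, then project up to the model
    dimension and mix with a small self-attention block (instead of a
    plain linear projection as in purely linear factorizations)."""
    def __init__(self, vocab_size, d_embed, d_model, n_heads=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_embed)   # factorized lookup table
        self.up = nn.Linear(d_embed, d_model)            # project to model dimension
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, tokens):                           # tokens: (batch, seq)
        x = self.up(self.embed(tokens))                  # (batch, seq, d_model)
        out, _ = self.attn(x, x, x)                      # self-attentive mixing
        return out

class SandwichEncoder(nn.Module):
    """Sandwich-style sharing: unique first and last layers, with one
    shared layer reused for every middle position."""
    def __init__(self, d_model, n_heads, n_layers):
        super().__init__()
        make = lambda: nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.first, self.shared, self.last = make(), make(), make()
        self.n_middle = n_layers - 2                     # all middle layers reuse self.shared

    def forward(self, x):
        x = self.first(x)
        for _ in range(self.n_middle):
            x = self.shared(x)                           # same weights applied each pass
        return self.last(x)

# Usage sketch: a 6-layer encoder stores only 3 layers' worth of parameters.
emb = SAFEEmbedding(vocab_size=32000, d_embed=128, d_model=512)
enc = SandwichEncoder(d_model=512, n_heads=8, n_layers=6)
hidden = enc(emb(torch.randint(0, 32000, (2, 16))))     # (2, 16, 512)
```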

Authors (3)
  1. Machel Reid (20 papers)
  2. Edison Marrese-Taylor (29 papers)
  3. Yutaka Matsuo (128 papers)
Citations (48)