
Towards Lightweight Transformer via Group-wise Transformation for Vision-and-Language Tasks (2204.07780v1)

Published 16 Apr 2022 in cs.CV

Abstract: Despite its exciting performance, the Transformer is criticized for its excessive parameters and computation cost. However, compressing the Transformer remains an open problem due to the internal complexity of its layer designs, i.e., Multi-Head Attention (MHA) and the Feed-Forward Network (FFN). To address this issue, we introduce Group-wise Transformation towards a universal yet lightweight Transformer for vision-and-language tasks, termed LW-Transformer. LW-Transformer applies Group-wise Transformation to reduce both the parameters and computations of the Transformer, while preserving its two main properties, i.e., the efficient attention modeling on diverse subspaces of MHA, and the expanding-scaling feature transformation of FFN. We apply LW-Transformer to a set of Transformer-based networks and quantitatively evaluate them on three vision-and-language tasks and six benchmark datasets. Experimental results show that while saving a large number of parameters and computations, LW-Transformer achieves very competitive performance against the original Transformer networks for vision-and-language tasks. To examine the generalization ability, we also apply our optimization strategy to a recently proposed image Transformer, Swin-Transformer, for image classification, where its effectiveness is also confirmed.
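The core idea, splitting a dense transformation into independent channel groups, can be illustrated with a minimal sketch. This is not the authors' exact implementation; the function name and shapes are hypothetical, chosen only to show how grouping cuts the parameter count of a linear layer by the group factor:

```python
import numpy as np

def grouped_linear(x, weights):
    """Group-wise linear transformation (illustrative sketch).

    x: array of shape (batch, d); weights: list of g matrices,
    each of shape (d//g, d//g). Channels are split into g groups
    and each group is transformed independently, so parameters
    drop from d*d (dense) to g*(d/g)^2 = d*d/g.
    """
    g = len(weights)
    chunks = np.split(x, g, axis=-1)          # g slices of shape (batch, d//g)
    outs = [c @ w for c, w in zip(chunks, weights)]
    return np.concatenate(outs, axis=-1)      # back to shape (batch, d)

d, g = 8, 4
x = np.arange(2 * d, dtype=float).reshape(2, d)
weights = [np.eye(d // g) for _ in range(g)]  # identity weights for the demo
y = grouped_linear(x, weights)
# Dense layer: d*d = 64 parameters; grouped: g*(d/g)^2 = 16 parameters.
```

In the paper this kind of transformation is applied inside both MHA and FFN sublayers to reduce cost while keeping the subspace structure of attention.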

Authors (8)
  1. Gen Luo (32 papers)
  2. Yiyi Zhou (38 papers)
  3. Xiaoshuai Sun (91 papers)
  4. Yan Wang (734 papers)
  5. Liujuan Cao (73 papers)
  6. Yongjian Wu (46 papers)
  7. Feiyue Huang (76 papers)
  8. Rongrong Ji (315 papers)
Citations (40)
